diff --git a/deploy-manage/autoscaling/autoscaling-in-eck.md b/deploy-manage/autoscaling/autoscaling-in-eck.md index e9bdefce9..4fc6c577a 100644 --- a/deploy-manage/autoscaling/autoscaling-in-eck.md +++ b/deploy-manage/autoscaling/autoscaling-in-eck.md @@ -222,7 +222,7 @@ kubectl get elasticsearchautoscaler autoscaling-sample \ -o jsonpath='{ .status.conditions }' | jq ``` -```json +```json subs=true [ { "lastTransitionTime": "2022-09-09T08:07:10Z", diff --git a/deploy-manage/cloud-organization/billing/billing-models.md b/deploy-manage/cloud-organization/billing/billing-models.md index a259a3c9f..6802ba52c 100644 --- a/deploy-manage/cloud-organization/billing/billing-models.md +++ b/deploy-manage/cloud-organization/billing/billing-models.md @@ -71,13 +71,13 @@ Based on these four key concepts, the prepaid consumption lifecycle is as follow 1. You purchase credits expressed in ECU, typically at a discount. 2. You begin using {{ecloud}} resources. 3. At every billing cycle (which takes place on the first of each month), the previous month's usage, expressed in ECU, is deducted from your ECU balance. -4. If your ECU balance is depleted before the credit expiration date, you are invoiced for on-demand usage in arrears at list price. On-demand usage is expressed in ECU, and is converted to currency amounts for invoicing purposes.¹ +4. If your ECU balance is depleted before the credit expiration date, you are invoiced for on-demand usage in arrears at list price. On-demand usage is expressed in ECU, and is converted to currency amounts for invoicing purposes.[^1^](#footnote-1) 5. At the end of the contract period, any credits remaining in your balance are forfeited. -6. During the contract period, you can purchase additional credits at any time (as an add-on). This can be done with the same discount as the original purchase. Credits purchased through an add-on have the same expiration as the original purchase.² +6. During the contract period, you can purchase additional credits at any time (as an add-on). This can be done with the same discount as the original purchase. Credits purchased through an add-on have the same expiration as the original purchase.[^2^](#footnote-2) -¹ When you renew your contract or commit to a multi-year contract, any on-demand usage incurred in the years other than the last are billed with the same discount as the original purchase. +^1^ $$$footnote-1$$$ When you renew your contract or commit to a multi-year contract, any on-demand usage incurred in the years other than the last is billed with the same discount as the original purchase. -² Purchasing credits through early renewals, or through add-ons with different expiration dates will be available in the near future. +^2^ $$$footnote-2$$$ Purchasing credits through early renewals or through add-ons with different expiration dates will be available in the near future. ::::{note} Existing annual+overages customers will be able to switch to prepaid consumption when they renew or sign a new contract. Existing manual burndown customers will be migrated gradually to prepaid consumption in the near future. Exceptions apply.
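The footnote markup introduced above pairs an in-text superscript link with an inline anchor placed next to the footnote body; the link fragment and the `$$$...$$$` anchor name must match. A minimal sketch of the pattern, using placeholder text and a hypothetical anchor name:

```markdown
On-demand usage is converted to currency amounts for invoicing purposes.[^1^](#footnote-example)

^1^ $$$footnote-example$$$ Placeholder footnote body. The `#footnote-example` fragment in the superscript link must match the `$$$footnote-example$$$` anchor exactly.
```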
diff --git a/deploy-manage/deploy/cloud-enterprise/ce-add-support-for-node-roles-autoscaling.md b/deploy-manage/deploy/cloud-enterprise/ce-add-support-for-node-roles-autoscaling.md index 6a303798e..b2a5e7f93 100644 --- a/deploy-manage/deploy/cloud-enterprise/ce-add-support-for-node-roles-autoscaling.md +++ b/deploy-manage/deploy/cloud-enterprise/ce-add-support-for-node-roles-autoscaling.md @@ -30,7 +30,7 @@ There are a number of fields that need to be added to each {{es}} node in order The following example is based on the `default` system owned deployment template that already supports `node_roles`. This template will be used as a reference for the next sections: -::::{dropdown} Reference example with support for `node_roles` +::::{dropdown} Reference example with support for node_roles :name: ece-node-roles-support-example ```json @@ -363,7 +363,7 @@ It is recommended to add the `id` field to each {{es}} topology element in the d The existing template contains three {{es}} topology elements and two resources (`elasticsearch` and `kibana`). -::::{dropdown} Custom example without support for `node_roles` +::::{dropdown} Custom example without support for node_roles ```json { ... @@ -471,7 +471,7 @@ Then, it is only necessary to add the four {{es}} topology elements (`warm`, `co After adding support for `node_roles`, the resulting deployment template should look similar to the following: -::::{dropdown} Custom example with support for `node_roles` +::::{dropdown} Custom example with support for node_roles :name: example-with-support-for-node-roles ```json @@ -772,7 +772,7 @@ These fields represent the default settings for the deployment. However, autosca Similar to the `node_roles` example, the following one is also based on the `default` deployment template that already supports `node_roles` and autoscaling. This template will be used as a reference for the next sections: -::::{dropdown} Reference example with support for `node_roles` and autoscaling +::::{dropdown} Reference example with support for node_roles and autoscaling ```json { ... @@ -1102,7 +1102,7 @@ To update a custom deployment template: After adding support for autoscaling to the [example](#ece-node-roles-support-example) presented in the previous section, the resulting deployment template should look similar to the following: -::::{dropdown} Custom example with support for `node_roles` and autoscaling +::::{dropdown} Custom example with support for node_roles and autoscaling ```json { ... 
@@ -1780,7 +1780,7 @@ After the migration plan has finished, we recommend following the [Migrate index The following is an example of a deployment plan that does not contain `node_roles`: -::::{dropdown} Example deployment plan with `node_type` +::::{dropdown} Example deployment plan with node_type ```json { "name": "Example deployment", @@ -1924,7 +1924,7 @@ The following is an example of a deployment plan that does not contain `node_rol After adding support for `node_roles` to the example deployment plan, the resulting plan should look similar to the following: -::::{dropdown} Example deployment plan with `node_roles` +::::{dropdown} Example deployment plan with node_roles ```json { "name": "Example deployment", diff --git a/deploy-manage/deploy/cloud-enterprise/ece-install-offline-no-registry.md b/deploy-manage/deploy/cloud-enterprise/ece-install-offline-no-registry.md index d4abfb047..f199ac253 100644 --- a/deploy-manage/deploy/cloud-enterprise/ece-install-offline-no-registry.md +++ b/deploy-manage/deploy/cloud-enterprise/ece-install-offline-no-registry.md @@ -27,7 +27,7 @@ To perform an offline installation without a private Docker registry, you have t For example, for {{ece}} 4.0.0 and the {{stack}} versions it shipped with, you need: * {{ece}} 4.0.0 - * {es} 9.0.0, {{kib}} 9.0.0, and APM 9.0.0 + * {{es}} 9.0.0, {{kib}} 9.0.0, and APM 9.0.0 2. Create .tar files of the images: diff --git a/deploy-manage/deploy/cloud-enterprise/enable-custom-endpoint-aliases.md b/deploy-manage/deploy/cloud-enterprise/enable-custom-endpoint-aliases.md index 40d80680d..1e518c459 100644 --- a/deploy-manage/deploy/cloud-enterprise/enable-custom-endpoint-aliases.md +++ b/deploy-manage/deploy/cloud-enterprise/enable-custom-endpoint-aliases.md @@ -10,7 +10,7 @@ mapped_pages: Custom endpoint aliases allow users to replace the UUID for each application with a human readable string. Platform administrators must enable this feature to allow deployment managers to create and modify aliases for their deployments. -::::{note} +::::{note} You need to update your proxy certificates to support this feature. :::: @@ -20,16 +20,16 @@ After installing or upgrading to version 2.10 or later: 1. [Login to the Cloud UI](log-into-cloud-ui.md) 2. [Update your proxy certificate(s)](../../security/secure-your-elastic-cloud-enterprise-installation/manage-security-certificates.md). In addition to currently configured domains, additional SAN entries must be configured for each application-specific subdomain: - ::::{note} + ::::{note} If you are not using wildcard certificates, you need to repeat this process for each deployment to account for specific aliases. :::: - * For {{es}}, the certificate needs to allow for ***.es.** - * For {{kib}}, the certificate needs to allow for ***.kb.** - * For APM, the certificate needs to allow for ***.apm.** - * For Fleet, the certificate needs to allow for ***.fleet.** - * For Universal Profiling, the certificate needs to allow for ***.profiling.** and ***.symbols.** + * For {{es}}, the certificate needs to allow for **\*.es.** + * For {{kib}}, the certificate needs to allow for **\*.kb.** + * For APM, the certificate needs to allow for **\*.apm.** + * For Fleet, the certificate needs to allow for **\*.fleet.** + * For Universal Profiling, the certificate needs to allow for **\*.profiling.** and **\*.symbols.** 3. In the **Platform** menu, select **Settings**. 4. 
Under the **Enable custom endpoint alias naming**, toggle the setting to allow platform administrators and deployment managers to choose a simplified, unique URL for the endpoint. diff --git a/deploy-manage/deploy/cloud-enterprise/manage-elastic-stack-versions.md b/deploy-manage/deploy/cloud-enterprise/manage-elastic-stack-versions.md index fc6a82063..7c609467f 100644 --- a/deploy-manage/deploy/cloud-enterprise/manage-elastic-stack-versions.md +++ b/deploy-manage/deploy/cloud-enterprise/manage-elastic-stack-versions.md @@ -45,7 +45,7 @@ $$$ece-elastic-stack-stackpacks-recent$$$ Following is the full list of available packs containing {{stack}} versions. Note that Enterprise Search was introduced with ECE 2.6.0 and requires that version or higher. -::::{dropdown} **Expand to view the full list** +::::{dropdown} Expand to view the full list | Required downloads | Minimum required ECE version | | --- | --- | | [{{es}}, {{kib}}, and APM stack pack: 9.0.0](https://download.elastic.co/cloud-enterprise/versions/9.0.0.zip) | ECE 4.0.0 | diff --git a/deploy-manage/deploy/cloud-on-k8s/advanced-configuration.md b/deploy-manage/deploy/cloud-on-k8s/advanced-configuration.md index f7cbaec81..fb2736877 100644 --- a/deploy-manage/deploy/cloud-on-k8s/advanced-configuration.md +++ b/deploy-manage/deploy/cloud-on-k8s/advanced-configuration.md @@ -17,7 +17,11 @@ This section covers the following topics: ## Use APM Agent central configuration [k8s-apm-agent-central-configuration] -[APM Agent configuration management](/solutions/observability/apm/apm-agent-central-configuration.md) [7.5.1] allows you to configure your APM Agents centrally from the {{kib}} APM app. To use this feature, the APM Server needs to be configured with connection details of the {{kib}} instance. If {{kib}} is managed by ECK, you can simply add a `kibanaRef` attribute to the APM Server specification: +:::{admonition} Added in 7.5.1 +APM Agent central configuration was added in 7.5.1. +::: + +[APM Agent configuration management](/solutions/observability/apm/apm-agent-central-configuration.md) allows you to configure your APM Agents centrally from the {{kib}} APM app. To use this feature, the APM Server needs to be configured with connection details of the {{kib}} instance. If {{kib}} is managed by ECK, you can simply add a `kibanaRef` attribute to the APM Server specification: ```yaml cat <
* **Endpoint Protection Essentials**: endpoint protections with {{elastic-defend}}.
* **Cloud Protection Essentials**: Cloud native security features.
| -| **Security Analytics Complete** | Everything in **Security Analytics Essentials*** plus advanced features such as entity analytics, threat intelligence, and more. Allows these add-ons:

* **Endpoint Protection Complete**: Everything in **Endpoint Protection Essentials** plus advanced endpoint detection and response features.
* **Cloud Protection Complete**: Everything in **Cloud Protection Essentials** plus advanced cloud security features.
| +| **Security Analytics Complete** | Everything in **Security Analytics Essentials** plus advanced features such as entity analytics, threat intelligence, and more. Allows these add-ons:

* **Endpoint Protection Complete**: Everything in **Endpoint Protection Essentials** plus advanced endpoint detection and response features.
* **Cloud Protection Complete**: Everything in **Cloud Protection Essentials** plus advanced cloud security features.
| ### Downgrading the feature tier [elasticsearch-manage-project-downgrading-the-feature-tier] diff --git a/deploy-manage/distributed-architecture/discovery-cluster-formation/modules-discovery-voting.md b/deploy-manage/distributed-architecture/discovery-cluster-formation/modules-discovery-voting.md index d0fd94a63..442c76042 100644 --- a/deploy-manage/distributed-architecture/discovery-cluster-formation/modules-discovery-voting.md +++ b/deploy-manage/distributed-architecture/discovery-cluster-formation/modules-discovery-voting.md @@ -58,7 +58,7 @@ It is important that the bootstrap configuration identifies exactly which nodes If the bootstrap configuration is not set correctly, when you start a brand-new cluster there is a risk that you will accidentally form two separate clusters instead of one. This situation can lead to data loss: you might start using both clusters before you notice that anything has gone wrong and it is impossible to merge them together later. ::::{note} -To illustrate the problem with configuring each node to expect a certain cluster size, imagine starting up a three-node cluster in which each node knows that it is going to be part of a three-node cluster. A majority of three nodes is two, so normally the first two nodes to discover each other form a cluster and the third node joins them a short time later. However, imagine that four nodes were erroneously started instead of three. In this case, there are enough nodes to form two separate clusters. Of course if each node is started manually then it’s unlikely that too many nodes are started. If you’re using an automated orchestrator, however, it’s certainly possible to get into this situation-- particularly if the orchestrator is not resilient to failures such as network partitions. +To illustrate the problem with configuring each node to expect a certain cluster size, imagine starting up a three-node cluster in which each node knows that it is going to be part of a three-node cluster. A majority of three nodes is two, so normally the first two nodes to discover each other form a cluster and the third node joins them a short time later. However, imagine that four nodes were erroneously started instead of three. In this case, there are enough nodes to form two separate clusters. Of course if each node is started manually then it’s unlikely that too many nodes are started. If you’re using an automated orchestrator, however, it’s certainly possible to get into this situation—particularly if the orchestrator is not resilient to failures such as network partitions. :::: The initial quorum is only required the very first time a whole cluster starts up. New nodes joining an established cluster can safely obtain all the information they need from the elected master. Nodes that have previously been part of a cluster will have stored to disk all the information that is required when they restart. diff --git a/deploy-manage/monitor/autoops/ec-autoops-faq.md b/deploy-manage/monitor/autoops/ec-autoops-faq.md index 29df3ac34..5e26cfd27 100644 --- a/deploy-manage/monitor/autoops/ec-autoops-faq.md +++ b/deploy-manage/monitor/autoops/ec-autoops-faq.md @@ -20,7 +20,7 @@ $$$faq-autoops-monitoring$$$Does AutoOps monitor the entire {{stack}}? : AutoOps is currently limited to {{es}} (not {{kib}}, Logstash and Beats). $$$faq-autoops-supported-versions$$$What versions of {{es}} are supported for {{ech}}? -: AutoOps supports {es} versions according to the [supported {{stack}} versions](https://www.elastic.co/support/eol). 
+: AutoOps supports {{es}} versions according to the [supported {{stack}} versions](https://www.elastic.co/support/eol). $$$faq-autoops-license$$$How is AutoOps currently licensed? : AutoOps current feature set is available to {{ech}} customers at all subscription tiers. For more information refer to the [subscription page](https://www.elastic.co/subscriptions/cloud). diff --git a/deploy-manage/monitor/logging-configuration/kib-advanced-logging.md b/deploy-manage/monitor/logging-configuration/kib-advanced-logging.md index 0a703fce5..cf5faa758 100644 --- a/deploy-manage/monitor/logging-configuration/kib-advanced-logging.md +++ b/deploy-manage/monitor/logging-configuration/kib-advanced-logging.md @@ -3,7 +3,7 @@ mapped_pages: - https://www.elastic.co/guide/en/kibana/current/_cli_configuration.html applies_to: deployment: - self: + self: ece: eck: --- @@ -18,7 +18,7 @@ If you are planning to ingest your logs using {{es}} or another tool, we recomme You can't configure these settings in an {{ech}} deployment. ::: -The {{kib}} logging system has three main components: *loggers*, *appenders* and *layouts*. +The {{kib}} logging system has three main components: *loggers*, *appenders* and *layouts*. * **Loggers** define what logging settings should be applied to a particular logger. * [Appenders](#logging-appenders) define where log messages are displayed (for example, stdout or console) and stored (for example, file on the disk). @@ -70,7 +70,7 @@ The following conversions are provided out of the box: * **message**: Outputs the application supplied message associated with the logging event. -* **meta**: Outputs the entries of `meta` object data in ***json** format, if one is present in the event. Example of `%meta` output: +* **meta**: Outputs the entries of `meta` object data in **json** format, if one is present in the event. Example of `%meta` output: ```bash // Meta{from: 'v7', to: 'v8'} @@ -391,7 +391,7 @@ logging: level: debug ``` -## Logging configuration using the CLI [logging-cli-migration] +## Logging configuration using the CLI [logging-cli-migration] You can specify your logging configuration using the CLI. For convenience, the `--verbose` and `--silent` flags exist as shortcuts and will continue to be supported beyond v7. diff --git a/deploy-manage/remote-clusters/ec-remote-cluster-self-managed.md b/deploy-manage/remote-clusters/ec-remote-cluster-self-managed.md index c1350ab8e..a48a989c1 100644 --- a/deploy-manage/remote-clusters/ec-remote-cluster-self-managed.md +++ b/deploy-manage/remote-clusters/ec-remote-cluster-self-managed.md @@ -132,7 +132,7 @@ A deployment can be configured to trust all or specific deployments in any envir Trust management will not work properly in clusters without an `otherName` value specified, as is the case by default in an out-of-the-box [{{es}} installation](../deploy/self-managed/installing-elasticsearch.md). To have the {{es}} certutil generate new certificates with the `otherName` attribute, use the file input with the `cn` attribute as in the example below. :::: -5. . Provide a name for the trusted environment. That name will appear in the trust summary of your deployment’s **Security** page. +5. Provide a name for the trusted environment. That name will appear in the trust summary of your deployment’s **Security** page. 6. Select **Create trust** to complete the configuration. 7. 
Configure the self-managed cluster to trust this deployment, so that both deployments are configured to trust each other: diff --git a/deploy-manage/security/encrypt-deployment-with-customer-managed-encryption-key.md b/deploy-manage/security/encrypt-deployment-with-customer-managed-encryption-key.md index 30bb3b3ab..f4174ae3b 100644 --- a/deploy-manage/security/encrypt-deployment-with-customer-managed-encryption-key.md +++ b/deploy-manage/security/encrypt-deployment-with-customer-managed-encryption-key.md @@ -24,12 +24,12 @@ Using a customer-managed key allows you to strengthen the security of your deplo Using a customer-managed key helps protect against threats related to the management and control of encryption keys. It does not directly protect against any specific types of attacks or threats. However, the ability to keep control over your own keys can help mitigate certain types of threats such as: -* **Insider threats.** By using a customer-managed key, Elastic does not have access to your encryption keys [1]. This can help prevent unauthorized access to data by insiders with malicious intent. +* **Insider threats.** By using a customer-managed key, Elastic does not have access to your encryption keys [^1^](#footnote-1). This can help prevent unauthorized access to data by insiders with malicious intent. * **Compromised physical infrastructure.** If a data center is physically compromised, the hosts are shut off. With customer-managed key encryption, that’s a second layer of protection that any malicious intruder would have to bypass, in addition to the existing built-in hardware encryption. Using a customer-managed key can help comply with regulations or security requirements, but it is not a complete security solution by itself. There are other types of threats that it does not protect against. -[1] You set up your customer-managed keys and their access in your key management service. When you provide a customer-managed key identifier to {{ecloud}}, we do not access or store the cryptographic material associated with that key. Customer-managed keys are not directly used to encrypt deployment or snapshot data. {{ecloud}} accesses your customer-managed keys to encrypt and decrypt data encryption keys, which, in turn, are used to encrypt the data. +^1^ $$$footnote-1$$$ You set up your customer-managed keys and their access in your key management service. When you provide a customer-managed key identifier to {{ecloud}}, we do not access or store the cryptographic material associated with that key. Customer-managed keys are not directly used to encrypt deployment or snapshot data. {{ecloud}} accesses your customer-managed keys to encrypt and decrypt data encryption keys, which, in turn, are used to encrypt the data. When a deployment encrypted with a customer-managed key is deleted or terminated, its data is locked first before being deleted, ensuring a fully secure deletion process. @@ -427,15 +427,15 @@ When {{ecloud}} can’t reach the encryption key, your deployment may become ina Within 30 minutes maximum, {{ecloud}} locks the directories in which your deployment data live and prompts you to delete your deployment as an increased security measure.
- While it is locked, the deployment retains all data but is not readable or writable*: + While it is locked, the deployment retains all data but is not readable or writable[^2^](#footnote-2): * If access to the key is never restored, the deployment data does not become accessible again * When restoring access to the key, the deployment becomes operational again: * If Elastic didn’t have to perform any platform operations on your instances during the locked period, operations are restored with minimum downtime. - * If Elastic performed some platform operations on your instances during the locked period, restoring operations can require some downtime. It’s also possible that some data can’t be restored** depending on the available snapshots. + * If Elastic performed some platform operations on your instances during the locked period, restoring operations can require some downtime. It’s also possible that some data can’t be restored[^3^](#footnote-3) depending on the available snapshots. -**During the locked directory period, Elastic may need to perform platform operations on the machines hosting your instances that result in data loss on the {{es}} data nodes but not the deployment snapshots.* +^2^ $$$footnote-2$$$ During the locked directory period, Elastic may need to perform platform operations on the machines hosting your instances that result in data loss on the {{es}} data nodes but not the deployment snapshots. -***Elastic recommends that you keep snapshots of your deployment in custom snapshot repositories in your own CSP account for data recovery purposes.* +^3^ $$$footnote-3$$$ Elastic recommends that you keep snapshots of your deployment in custom snapshot repositories in your own CSP account for data recovery purposes. diff --git a/deploy-manage/security/k8s-network-policies.md b/deploy-manage/security/k8s-network-policies.md index 8d0476dc7..e8391aca7 100644 --- a/deploy-manage/security/k8s-network-policies.md +++ b/deploy-manage/security/k8s-network-policies.md @@ -196,7 +196,7 @@ spec: podSelector: matchLabels: common.k8s.elastic.co/type: elasticsearch - # [Optional] Restrict to a single {es} cluster named hulk. + # [Optional] Restrict to a single Elasticsearch cluster named hulk. # elasticsearch.k8s.elastic.co/cluster-name=hulk - ports: - port: 53 diff --git a/deploy-manage/tools/snapshot-and-restore.md b/deploy-manage/tools/snapshot-and-restore.md index 3e05f9868..ceb997133 100644 --- a/deploy-manage/tools/snapshot-and-restore.md +++ b/deploy-manage/tools/snapshot-and-restore.md @@ -141,14 +141,14 @@ Any index you restore from a snapshot must also be compatible with the current c | Index creation version | 6.8 | 7.0–7.1 | 7.2–7.17 | 8.0–8.2 | 8.3–8.17 | |------------------------|-----|---------|---------|---------|---------| -| 5.0–5.6 | ✅ | ❌ | ❌ | ❌ | ✅ [1] | -| 6.0–6.7 | ✅ | ✅ | ✅ | ❌ | ✅ [1] | -| 6.8 | ✅ | ❌ | ✅ | ❌ | ✅ [1] | +| 5.0–5.6 | ✅ | ❌ | ❌ | ❌ | ✅ [^1^](#footnote-1) | +| 6.0–6.7 | ✅ | ✅ | ✅ | ❌ | ✅ [^1^](#footnote-1) | +| 6.8 | ✅ | ❌ | ✅ | ❌ | ✅ [^1^](#footnote-1) | | 7.0–7.1 | ❌ | ✅ | ✅ | ✅ | ✅ | | 7.2–7.17 | ❌ | ❌ | ✅ | ✅ | ✅ | | 8.0–8.17 | ❌ | ❌ | ❌ | ✅ | ✅ | -[¹] Supported with [archive indices](/deploy-manage/upgrade/deployment-or-cluster/reading-indices-from-older-elasticsearch-versions.md). +^1^ $$$footnote-1$$$ Supported with [archive indices](/deploy-manage/upgrade/deployment-or-cluster/reading-indices-from-older-elasticsearch-versions.md). You can’t restore an index to an earlier version of {{es}}. 
For example, you can’t restore an index created in 7.6.0 to a cluster running 7.5.0. diff --git a/deploy-manage/tools/snapshot-and-restore/create-snapshots.md b/deploy-manage/tools/snapshot-and-restore/create-snapshots.md index 9071f8f24..4321898e9 100644 --- a/deploy-manage/tools/snapshot-and-restore/create-snapshots.md +++ b/deploy-manage/tools/snapshot-and-restore/create-snapshots.md @@ -111,7 +111,7 @@ POST _security/role/slm-read-only ### Create an {{slm-init}} policy [create-slm-policy] -To manage {{slm-init}} in {{kib}}, go to the main menu and click **Stack Management** > **Snapshot and Restore*** > ***Policies**. To create a policy, click **Create policy**. +To manage {{slm-init}} in {{kib}}, go to the main menu and click **Stack Management** > **Snapshot and Restore** > **Policies**. To create a policy, click **Create policy**. You can also manage {{slm-init}} using the [{{slm-init}} APIs](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-slm). To create a policy, use the [create {{slm-init}} policy API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-slm-put-lifecycle). diff --git a/deploy-manage/tools/snapshot-and-restore/elastic-cloud-hosted.md b/deploy-manage/tools/snapshot-and-restore/elastic-cloud-hosted.md index 7688ee9d9..66ad4dc1e 100644 --- a/deploy-manage/tools/snapshot-and-restore/elastic-cloud-hosted.md +++ b/deploy-manage/tools/snapshot-and-restore/elastic-cloud-hosted.md @@ -60,7 +60,7 @@ In **{{ech}}**, snapshot repositories are automatically registered for you, but * {{kib}}'s **Snapshot and Restore** feature * {{es}}'s [snapshot repository management APIs](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-snapshot) -To manage repositories in {{kib}}, go to the main menu and click **Stack Management** > **Snapshot and Restore*** > ***Repositories**. To register a snapshot repository, click **Register repository**. +To manage repositories in {{kib}}, go to the main menu and click **Stack Management** > **Snapshot and Restore** > **Repositories**. To register a snapshot repository, click **Register repository**. You can also register a repository using the [Create snapshot repository API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-snapshot-create-repository). diff --git a/deploy-manage/tools/snapshot-and-restore/google-cloud-storage-repository.md b/deploy-manage/tools/snapshot-and-restore/google-cloud-storage-repository.md index a71759c9a..4914e5685 100644 --- a/deploy-manage/tools/snapshot-and-restore/google-cloud-storage-repository.md +++ b/deploy-manage/tools/snapshot-and-restore/google-cloud-storage-repository.md @@ -213,7 +213,11 @@ The following settings are supported: `application_name` -: [6.3.0] Name used by the client when it uses the Google Cloud Storage service. +: :::{admonition} Deprecated in 6.3.0 + This setting was deprecated in 6.3.0. + ::: + + Name used by the client when it uses the Google Cloud Storage service. 
### Recommended bucket permission [repository-gcs-bucket-permission] diff --git a/deploy-manage/tools/snapshot-and-restore/self-managed.md b/deploy-manage/tools/snapshot-and-restore/self-managed.md index b1d4f67f2..3be812f43 100644 --- a/deploy-manage/tools/snapshot-and-restore/self-managed.md +++ b/deploy-manage/tools/snapshot-and-restore/self-managed.md @@ -42,7 +42,7 @@ You can register and manage snapshot repositories in two ways: * {{kib}}'s **Snapshot and Restore** feature * {{es}}'s [snapshot repository management APIs](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-snapshot) -To manage repositories in {{kib}}, go to the main menu and click **Stack Management** > **Snapshot and Restore*** > ***Repositories**. To register a snapshot repository, click **Register repository**. +To manage repositories in {{kib}}, go to the main menu and click **Stack Management** > **Snapshot and Restore** > **Repositories**. To register a snapshot repository, click **Register repository**. You can also register a repository using the [Create snapshot repository API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-snapshot-create-repository). diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/elasticsearch-privileges.md b/deploy-manage/users-roles/cluster-or-deployment-auth/elasticsearch-privileges.md index 6742daca1..33f81e332 100644 --- a/deploy-manage/users-roles/cluster-or-deployment-auth/elasticsearch-privileges.md +++ b/deploy-manage/users-roles/cluster-or-deployment-auth/elasticsearch-privileges.md @@ -81,7 +81,11 @@ To learn how to assign privileges to a role, refer to [](/deploy-manage/users-ro `manage_data_frame_transforms` -: All operations related to managing {{transforms}}. [7.5] Use `manage_transform` instead. +: All operations related to managing {{transforms}}. + + :::{admonition} Deprecated in 7.5 + Use `manage_transform` instead. + ::: This privilege is not available in {{serverless-full}}. @@ -165,8 +169,9 @@ To learn how to assign privileges to a role, refer to [](/deploy-manage/users-ro This privilege is not available in {{serverless-full}}. - [8.15] Also grants the permission to start and stop {{Ilm}}, using the [ILM start](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ilm-start) and [ILM stop](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ilm-stop) APIs. In a future major release, this privilege will not grant any {{Ilm}} permissions. - + :::{admonition} Deprecated in 8.15 + Also grants the permission to start and stop {{Ilm}}, using the [ILM start](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ilm-start) and [ILM stop](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ilm-stop) APIs. In a future major release, this privilege will not grant any {{Ilm}} permissions. + ::: `manage_token` : All security-related operations on tokens that are generated by the {{es}} Token Service. @@ -255,8 +260,9 @@ To learn how to assign privileges to a role, refer to [](/deploy-manage/users-ro This privilege is not available in {{serverless-full}}. - [8.15] Also grants the permission to get the {{Ilm}} status, using the [ILM get status API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ilm-get-status). In a future major release, this privilege will not grant any {{Ilm}} permissions. 
- + :::{admonition} Deprecated in 8.15 + Also grants the permission to get the {{Ilm}} status, using the [ILM get status API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ilm-get-status). In a future major release, this privilege will not grant any {{Ilm}} permissions. + ::: `read_security` : All read-only security-related operations, such as getting users, user profiles, {{es}} API keys, {{es}} service accounts, roles and role mappings. Allows [querying](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-query-api-keys) and [retrieving information](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-get-api-key) on all {{es}} API keys. @@ -279,7 +285,9 @@ To learn how to assign privileges to a role, refer to [](/deploy-manage/users-ro `create` : Privilege to index documents. - [8.0] Also grants the permission to update the index mapping (but not the data streams mapping), using the [updating mapping API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-mapping) or by relying on [dynamic field mapping](../../../manage-data/data-store/mapping/dynamic-mapping.md). In a future major release, this privilege will not grant any mapping update permissions. + :::{admonition} Deprecated in 8.0 + Also grants the permission to update the index mapping (but not the data streams mapping), using the [updating mapping API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-mapping) or by relying on [dynamic field mapping](/manage-data/data-store/mapping/dynamic-mapping.md). In a future major release, this privilege will not grant any mapping update permissions. + ::: ::::{note} This privilege does not restrict the index operation to the creation of documents but instead restricts API use to the index API. The index API allows a user to overwrite a previously indexed document. See the `create_doc` privilege for an alternative. @@ -289,7 +297,9 @@ To learn how to assign privileges to a role, refer to [](/deploy-manage/users-ro `create_doc` : Privilege to index documents. It does not grant the permission to update or overwrite existing documents. - [8.0] Also grants the permission to update the index mapping (but not the data streams mapping), using the [updating mapping API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-mapping) or by relying on [dynamic field mapping](../../../manage-data/data-store/mapping/dynamic-mapping.md). In a future major release, this privilege will not grant any mapping update permissions. + :::{admonition} Deprecated in 8.0 + Also grants the permission to update the index mapping (but not the data streams mapping), using the [updating mapping API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-mapping) or by relying on [dynamic field mapping](/manage-data/data-store/mapping/dynamic-mapping.md). In a future major release, this privilege will not grant any mapping update permissions. + ::: ::::{note} This privilege relies on the `op_type` of indexing requests ([Index](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-create) and [Bulk](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-bulk)). 
When ingesting documents as a user who has the `create_doc` privilege (and no higher privilege such as `index` or `write`), you must ensure that *op_type* is set to *create* through one of the following: @@ -329,8 +339,9 @@ To learn how to assign privileges to a role, refer to [](/deploy-manage/users-ro `index` : Privilege to index and update documents. - [8.0] Also grants the permission to update the index mapping (but not the data streams mapping), using the [updating mapping API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-mapping) or by relying on [dynamic field mapping](../../../manage-data/data-store/mapping/dynamic-mapping.md). In a future major release, this privilege will not grant any mapping update permissions. - + :::{admonition} Deprecated in 8.0 + Also grants the permission to update the index mapping (but not the data streams mapping), using the [updating mapping API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-mapping) or by relying on [dynamic field mapping](../../../manage-data/data-store/mapping/dynamic-mapping.md). In a future major release, this privilege will not grant any mapping update permissions. + ::: `maintenance` : Permits refresh, flush, synced flush and force merge index administration operations. No privilege to read or write index data or otherwise manage the index. @@ -377,8 +388,9 @@ To learn how to assign privileges to a role, refer to [](/deploy-manage/users-ro `write` : Privilege to perform all write operations to documents, which includes the permission to index, update, and delete documents as well as performing bulk operations, while also allowing to dynamically update the index mapping. - [8.0] It also grants the permission to update the index mapping (but not the data streams mapping), using the [updating mapping API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-mapping). This will be retracted in a future major release. - + :::{admonition} Deprecated in 8.0 + It also grants the permission to update the index mapping (but not the data streams mapping), using the [updating mapping API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-mapping). This will be retracted in a future major release. + ::: ## Run as privilege [_run_as_privilege] diff --git a/docset.yml b/docset.yml index e5053d35e..e9409df0c 100644 --- a/docset.yml +++ b/docset.yml @@ -82,7 +82,7 @@ subs: es-serverless: "Elasticsearch Serverless" obs-serverless: "Elastic Observability Serverless" sec-serverless: "Elastic Security Serverless" - ess-leadin-short: "Our hosted Elasticsearch Service is available on AWS, GCP, and Azure, and you can https://cloud.elastic.co/registration{ess-utm-params}[try it for free]." + ess-leadin-short: "Our hosted Elasticsearch Service is available on AWS, GCP, and Azure, and you can try it for free: https://cloud.elastic.co/registration." 
apm-app: "APM app" uptime-app: "Uptime app" synthetics-app: "Synthetics app" diff --git a/explore-analyze/alerts-cases/alerts/alerting-troubleshooting.md b/explore-analyze/alerts-cases/alerts/alerting-troubleshooting.md index d59508cd1..d5421794b 100644 --- a/explore-analyze/alerts-cases/alerts/alerting-troubleshooting.md +++ b/explore-analyze/alerts-cases/alerts/alerting-troubleshooting.md @@ -191,7 +191,7 @@ This approach should be used only temporarily as a last resort to restore functi ## Limitations [alerting-limitations] -The following limitations and known problems apply to the 9.0.0-beta1 release of the {{kib}} {{alert-features}}: +The following limitations and known problems apply to the {{version}} release of the {{kib}} {{alert-features}}: ### Alert visibility [_alert_visibility] diff --git a/explore-analyze/alerts-cases/alerts/rule-action-variables.md b/explore-analyze/alerts-cases/alerts/rule-action-variables.md index d2702f4d9..2a8fe5c47 100644 --- a/explore-analyze/alerts-cases/alerts/rule-action-variables.md +++ b/explore-analyze/alerts-cases/alerts/rule-action-variables.md @@ -64,7 +64,8 @@ If the rule’s action frequency is a summary of alerts, it passes the following `alerts.all.data` : An array of objects for all alerts. The following object properties are examples; it is not a comprehensive list. - ::::{dropdown} Properties of the alerts.all.data objects + **Properties of the alerts.all.data objects**: + `kibana.alert.end` : Datetime stamp of alert end. [preview] @@ -83,15 +84,14 @@ If the rule’s action frequency is a summary of alerts, it passes the following `kibana.alert.status` : Alert status (for example, active or OK). [preview] - :::: - `alerts.new.count` : The count of new alerts. `alerts.new.data` : An array of objects for new alerts. The following object properties are examples; it is not a comprehensive list. - ::::{dropdown} Properties of the alerts.new.data objects + **Properties of the alerts.new.data objects**: + `kibana.alert.end` : Datetime stamp of alert end. [preview] @@ -110,15 +110,14 @@ If the rule’s action frequency is a summary of alerts, it passes the following `kibana.alert.status` : Alert status (for example, active or OK). [preview] - :::: - `alerts.ongoing.count` : The count of ongoing alerts. `alerts.ongoing.data` : An array of objects for ongoing alerts. The following object properties are examples; it is not a comprehensive list. - ::::{dropdown} Properties of the alerts.ongoing.data objects + **Properties of the alerts.ongoing.data objects**: + `kibana.alert.end` : Datetime stamp of alert end. [preview] @@ -137,15 +136,14 @@ If the rule’s action frequency is a summary of alerts, it passes the following `kibana.alert.status` : Alert status (for example, active or OK). [preview] - :::: - `alerts.recovered.count` : The count of recovered alerts. `alerts.recovered.data` : An array of objects for recovered alerts. The following object properties are examples; it is not a comprehensive list. - ::::{dropdown} Properties of the alerts.recovered.data objects + **Properties of the alerts.recovered.data objects**: + `kibana.alert.end` : Datetime stamp of alert end. [preview] @@ -164,8 +162,6 @@ If the rule’s action frequency is a summary of alerts, it passes the following `kibana.alert.status` : Alert status (for example, active or OK). 
[preview] - :::: - ### Action frequency: For each alert [alert-action-variables] If the rule’s action frequency is not a summary of alerts, it passes the following variables: diff --git a/explore-analyze/alerts-cases/alerts/rule-type-es-query.md b/explore-analyze/alerts-cases/alerts/rule-type-es-query.md index 2a2db3343..66949a629 100644 --- a/explore-analyze/alerts-cases/alerts/rule-type-es-query.md +++ b/explore-analyze/alerts-cases/alerts/rule-type-es-query.md @@ -36,7 +36,13 @@ When you create an {{es}} query rule, your choice of query type affects the info If you use [KQL](../../query-filter/languages/kql.md) or [Lucene](../../query-filter/languages/lucene-query-syntax.md), you must specify a data view then define a text-based query. For example, `http.request.referrer: "https://example.com"`. - If you use [ES|QL](../../query-filter/languages/esql.md), you must provide a source command followed by an optional series of processing commands, separated by pipe characters (|). [8.16.0] For example: + If you use [ES|QL](../../query-filter/languages/esql.md), you must provide a source command followed by an optional series of processing commands, separated by pipe characters (|). + + :::{admonition} Added in 8.16.0 + This functionality was added in 8.16.0. + ::: + + For example: ```sh FROM kibana_sample_data_logs diff --git a/explore-analyze/alerts-cases/cases/manage-cases-settings.md b/explore-analyze/alerts-cases/cases/manage-cases-settings.md index d8b2bc8e5..8163f60ac 100644 --- a/explore-analyze/alerts-cases/cases/manage-cases-settings.md +++ b/explore-analyze/alerts-cases/cases/manage-cases-settings.md @@ -51,7 +51,11 @@ To update a connector, click **Update ** and edit the connector ## Custom fields [case-custom-fields] -You can add optional and required fields for customized case collaboration. [8.15.0] +:::{admonition} Added in 8.15.0 +This functionality was added in 8.15.0. +::: + +You can add optional and required fields for customized case collaboration. To create a custom field: diff --git a/explore-analyze/alerts-cases/cases/manage-cases.md b/explore-analyze/alerts-cases/cases/manage-cases.md index 3ace7a170..202fab32f 100644 --- a/explore-analyze/alerts-cases/cases/manage-cases.md +++ b/explore-analyze/alerts-cases/cases/manage-cases.md @@ -27,7 +27,12 @@ Open a new case to keep track of issues and share their details with colleagues. :::: 4. Optionally, add a category, assignees, and tags. You can add users only if they meet the necessary [prerequisites](setup-cases.md). -5. If you defined any [custom fields](manage-cases-settings.md#case-custom-fields), they appear in the **Additional fields** section. [8.15.0] +5. If you defined any [custom fields](manage-cases-settings.md#case-custom-fields), they appear in the **Additional fields** section. + + :::{admonition} Added in 8.15.0 + This functionality was added in 8.15.0. + ::: + 6. For the **External incident management system**, select a connector. For more information, refer to [External incident management systems](manage-cases-settings.md#case-connectors). 7. After you’ve completed all of the required fields, click **Create case**. 
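The inline `[x.y.z]` version badges removed in the hunks above are consistently replaced with admonition directives. Both variants used in this diff share the same shape; a sketch drawn from the conversions above (version numbers and body text are illustrative):

```markdown
:::{admonition} Added in 8.15.0
This functionality was added in 8.15.0.
:::

:::{admonition} Deprecated in 7.5
Use `manage_transform` instead.
:::
```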
diff --git a/explore-analyze/elastic-inference/inference-api.md b/explore-analyze/elastic-inference/inference-api.md index 3edf5ef8f..e4e5b195a 100644 --- a/explore-analyze/elastic-inference/inference-api.md +++ b/explore-analyze/elastic-inference/inference-api.md @@ -56,7 +56,7 @@ For more information about adaptive allocations and resources, refer to the trai ## Default {{infer}} endpoints [default-enpoints] -Your {{es}} deployment contains preconfigured {{infer}} endpoints which makes them easier to use when defining `semantic_text` fields or using {{infer}} processors. The following list contains the default {infer} endpoints listed by `inference_id`: +Your {{es}} deployment contains preconfigured {{infer}} endpoints which makes them easier to use when defining `semantic_text` fields or using {{infer}} processors. The following list contains the default {{infer}} endpoints listed by `inference_id`: * `.elser-2-elasticsearch`: uses the [ELSER](../../explore-analyze/machine-learning/nlp/ml-nlp-elser.md) built-in trained model for `sparse_embedding` tasks (recommended for English language tex). The `model_id` is `.elser_model_2_linux-x86_64`. * `.multilingual-e5-small-elasticsearch`: uses the [E5](../../explore-analyze/machine-learning/nlp/ml-nlp-e5.md) built-in trained model for `text_embedding` tasks (recommended for non-English language texts). The `model_id` is `.e5_model_2_linux-x86_64`. diff --git a/explore-analyze/machine-learning/anomaly-detection/ml-ad-run-jobs.md b/explore-analyze/machine-learning/anomaly-detection/ml-ad-run-jobs.md index 686e5d9f7..010898756 100644 --- a/explore-analyze/machine-learning/anomaly-detection/ml-ad-run-jobs.md +++ b/explore-analyze/machine-learning/anomaly-detection/ml-ad-run-jobs.md @@ -129,7 +129,9 @@ One way to update the roles that are stored within the {{dfeed}} without changin :::: -If the data that you want to analyze is not stored in {{es}}, you cannot use {{dfeeds}}. You can however send batches of data directly to the job by using the [post data to jobs API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-post-data). [7.11.0] +:::{admonition} Deprecated in 7.11.0 +If the data that you want to analyze is not stored in {{es}}, you cannot use {{dfeeds}}. You can however send batches of data directly to the job by using the [post data to jobs API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-post-data). +::: ## Open the job [ml-ad-open-job] diff --git a/explore-analyze/machine-learning/anomaly-detection/ml-limitations.md b/explore-analyze/machine-learning/anomaly-detection/ml-limitations.md index 4c41dfdfe..f9ff90d0e 100644 --- a/explore-analyze/machine-learning/anomaly-detection/ml-limitations.md +++ b/explore-analyze/machine-learning/anomaly-detection/ml-limitations.md @@ -9,7 +9,7 @@ mapped_pages: # Limitations [ml-limitations] -The following limitations and known problems apply to the 9.0.0-beta1 release of the Elastic {{ml-features}}. The limitations are grouped into four categories: +The following limitations and known problems apply to the {{version}} release of the Elastic {{ml-features}}. The limitations are grouped into four categories: * [Platform limitations](#ad-platform-limitations) are related to the platform that hosts the {{ml}} feature of the {{stack}}. * [Configuration limitations](#ad-config-limitations) apply to the configuration process of the {{anomaly-jobs}}. 
diff --git a/explore-analyze/machine-learning/data-frame-analytics/ml-dfa-limitations.md b/explore-analyze/machine-learning/data-frame-analytics/ml-dfa-limitations.md index 1663cbd74..f5f26da76 100644 --- a/explore-analyze/machine-learning/data-frame-analytics/ml-dfa-limitations.md +++ b/explore-analyze/machine-learning/data-frame-analytics/ml-dfa-limitations.md @@ -9,7 +9,7 @@ mapped_pages: # Limitations [ml-dfa-limitations] -The following limitations and known problems apply to the 9.0.0-beta1 release of the Elastic {{dfanalytics}} feature. The limitations are grouped into the following categories: +The following limitations and known problems apply to the {{version}} release of the Elastic {{dfanalytics}} feature. The limitations are grouped into the following categories: * [Platform limitations](#dfa-platform-limitations) are related to the platform that hosts the {{ml}} feature of the {{stack}}. * [Configuration limitations](#dfa-config-limitations) apply to the configuration process of the {{dfanalytics-jobs}}. diff --git a/explore-analyze/machine-learning/nlp/ml-nlp-import-model.md b/explore-analyze/machine-learning/nlp/ml-nlp-import-model.md index 2ea05ff7b..3f03a0eef 100644 --- a/explore-analyze/machine-learning/nlp/ml-nlp-import-model.md +++ b/explore-analyze/machine-learning/nlp/ml-nlp-import-model.md @@ -43,7 +43,7 @@ Trained models must be in a TorchScript representation for use with {{stack-ml-f 3. Specify the identifier for the model in the Hugging Face model hub. 4. Specify the type of NLP task. Supported values are `fill_mask`, `ner`, `question_answering`, `text_classification`, `text_embedding`, `text_expansion`, `text_similarity`, and `zero_shot_classification`. -For more details, refer to [](eland://reference/machine-learning.md#ml-nlp-pytorch). +For more details, refer to the [Eland documentation](eland://reference/machine-learning.md#ml-nlp-pytorch). ## Import with Docker [ml-nlp-import-docker] diff --git a/explore-analyze/machine-learning/nlp/ml-nlp-limitations.md b/explore-analyze/machine-learning/nlp/ml-nlp-limitations.md index ea18f093d..c2bb25741 100644 --- a/explore-analyze/machine-learning/nlp/ml-nlp-limitations.md +++ b/explore-analyze/machine-learning/nlp/ml-nlp-limitations.md @@ -8,7 +8,7 @@ mapped_pages: # Limitations [ml-nlp-limitations] -The following limitations and known problems apply to the 9.0.0-beta1 release of the Elastic {{nlp}} trained models feature. +The following limitations and known problems apply to the {{version}} release of the Elastic {{nlp}} trained models feature. ## Document size limitations when using `semantic_text` fields [ml-nlp-large-documents-limit-10k-10mb] diff --git a/explore-analyze/machine-learning/nlp/ml-nlp-ner-example.md b/explore-analyze/machine-learning/nlp/ml-nlp-ner-example.md index 2add42f12..3ef74a35b 100644 --- a/explore-analyze/machine-learning/nlp/ml-nlp-ner-example.md +++ b/explore-analyze/machine-learning/nlp/ml-nlp-ner-example.md @@ -58,7 +58,7 @@ Deployed models can be evaluated in {{kib}} under **{{ml-app}}** > **Trained Mod :screenshot: ::: -::::{dropdown} **Test the model by using the _infer API** +::::{dropdown} Test the model by using the _infer API You can also evaluate your models by using the [_infer API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-infer-trained-model). In the following request, `text_field` is the field name where the model expects to find the input, as defined in the model configuration. 
By default, if the model was uploaded via Eland, the input field is `text_field`. ```js diff --git a/explore-analyze/machine-learning/nlp/ml-nlp-text-emb-vector-search-example.md b/explore-analyze/machine-learning/nlp/ml-nlp-text-emb-vector-search-example.md index bcff02651..deaaa2f72 100644 --- a/explore-analyze/machine-learning/nlp/ml-nlp-text-emb-vector-search-example.md +++ b/explore-analyze/machine-learning/nlp/ml-nlp-text-emb-vector-search-example.md @@ -62,7 +62,7 @@ Deployed models can be evaluated in {{kib}} under **{{ml-app}}** > **Trained Mod :screenshot: ::: -::::{dropdown} **Test the model by using the _infer API** +::::{dropdown} Test the model by using the _infer API You can also evaluate your models by using the [_infer API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-infer-trained-model). In the following request, `text_field` is the field name where the model expects to find the input, as defined in the model configuration. By default, if the model was uploaded via Eland, the input field is `text_field`. ```js diff --git a/explore-analyze/query-filter.md b/explore-analyze/query-filter.md index b8d470442..26ecaa371 100644 --- a/explore-analyze/query-filter.md +++ b/explore-analyze/query-filter.md @@ -11,8 +11,8 @@ mapped_pages: You can use {{es}} as a basic document store to retrieve documents and their metadata. However, the real power of {{es}} comes from its advanced search and analytics capabilities. * **{{es}} makes JSON documents searchable and aggregatable.** The documents are stored in an [index](/manage-data/data-store/index-basics.md) or [data stream](/manage-data/data-store/data-streams.md), which represent one type of data. -* ***Searchable* means that you can filter the documents for conditions.** For example, you can filter for data "within the last 7 days" or data that "contains the word {{kib}}". {{kib}} provides many ways for you to construct filters, which are also called queries or search terms. -* ***Aggregatable* means that you can extract summaries from matching documents.** The simplest aggregation is **count**, and it is frequently used in combination with the **date histogram**, to see count over time. The **terms** aggregation shows the most frequent values. +* **Searchable** means that you can filter the documents for conditions. For example, you can filter for data "within the last 7 days" or data that "contains the word {{kib}}". {{kib}} provides many ways for you to construct filters, which are also called queries or search terms. +* **Aggregatable** means that you can extract summaries from matching documents. The simplest aggregation is **count**, and it is frequently used in combination with the **date histogram**, to see count over time. The **terms** aggregation shows the most frequent values. ## Querying diff --git a/explore-analyze/query-filter/languages/sql-jdbc.md b/explore-analyze/query-filter/languages/sql-jdbc.md index 5a91554d9..67d947911 100644 --- a/explore-analyze/query-filter/languages/sql-jdbc.md +++ b/explore-analyze/query-filter/languages/sql-jdbc.md @@ -11,7 +11,7 @@ mapped_pages: {{es}}'s SQL jdbc driver is a rich, fully featured JDBC driver for {{es}}. It is Type 4 driver, meaning it is a platform independent, stand-alone, Direct to Database, pure Java driver that converts JDBC calls to Elasticsearch SQL.
-## Installation [sql-jdbc-installation] +## Installation [sql-jdbc-installation] The JDBC driver can be obtained from: @@ -21,11 +21,11 @@ Dedicated page Maven dependency : [Maven](https://maven.apache.org/)-compatible tools can retrieve it automatically as a dependency: -```xml +```xml subs=true <dependency> <groupId>org.elasticsearch.plugin</groupId> <artifactId>x-pack-sql-jdbc</artifactId> - <version>9.0.0-beta1</version> + <version>{{version}}</version> </dependency> ``` @@ -41,12 +41,12 @@ from [Maven Central Repository](https://search.maven.org/artifact/org.elasticsea ``` -## Version compatibility [jdbc-compatibility] +## Version compatibility [jdbc-compatibility] Your driver must be compatible with your {{es}} version. -::::{important} -The driver version cannot be newer than the {{es}} version. For example, {{es}} version 7.10.0 is not compatible with 9.0.0-beta1 drivers. +::::{important} +The driver version cannot be newer than the {{es}} version. For example, {{es}} version 7.10.0 is not compatible with {{version}} drivers. :::: @@ -55,7 +55,7 @@ The driver version cannot be newer than the {{es}} version. | 7.7.0 and earlier versions | * The same version.
| {{es}} 7.6.1 is only compatible with 7.6.1 drivers. | -## Setup [jdbc-setup] +## Setup [jdbc-setup] The driver main class is `org.elasticsearch.xpack.sql.jdbc.EsDriver`. Note the driver implements the JDBC 4.0 `Service Provider` mechanism meaning it is registered automatically as long as it is available in the classpath. @@ -83,7 +83,7 @@ jdbc:[es|elasticsearch]://[[http|https]://]?[host[:port]]?/[prefix]?[\?[option=v The driver recognized the following properties: -#### Essential [jdbc-cfg] +#### Essential [jdbc-cfg] $$$jdbc-cfg-timezone$$$ @@ -91,7 +91,7 @@ $$$jdbc-cfg-timezone$$$ : Timezone used by the driver *per connection* indicated by its `ID`. **Highly** recommended to set it (to, say, `UTC`) as the JVM timezone can vary, is global for the entire JVM and can’t be changed easily when running under a security manager. -#### Network [jdbc-cfg-network] +#### Network [jdbc-cfg-network] `connect.timeout` (default `30000`) : Connection timeout (in milliseconds). That is the maximum amount of time waiting to make a connection to the server. @@ -109,7 +109,7 @@ $$$jdbc-cfg-timezone$$$ : Query timeout (in milliseconds). That is the maximum amount of time waiting for a query to return. -### Basic Authentication [jdbc-cfg-auth] +### Basic Authentication [jdbc-cfg-auth] `user` : Basic Authentication user name @@ -118,7 +118,7 @@ $$$jdbc-cfg-timezone$$$ : Basic Authentication password -### SSL [jdbc-cfg-ssl] +### SSL [jdbc-cfg-ssl] `ssl` (default `false`) : Enable SSL @@ -145,7 +145,7 @@ $$$jdbc-cfg-timezone$$$ : SSL protocol to be used -### Proxy [_proxy] +### Proxy [_proxy] `proxy.http` : Http proxy host name @@ -154,19 +154,19 @@ $$$jdbc-cfg-timezone$$$ : SOCKS proxy host name -### Mapping [_mapping] +### Mapping [_mapping] `field.multi.value.leniency` (default `true`) : Whether to be lenient and return the first value (without any guarantees of what that will be - typically the first in natural ascending order) for fields with multiple values (true) or throw an exception. -### Index [_index] +### Index [_index] `index.include.frozen` (default `false`) : Whether to include frozen indices in the query execution or not (default). -### Cluster [_cluster] +### Cluster [_cluster] `catalog` : Default catalog (cluster) for queries. If unspecified, the queries execute on the data in the local cluster only. @@ -175,13 +175,13 @@ $$$jdbc-cfg-timezone$$$ -### Error handling [_error_handling] +### Error handling [_error_handling] `allow.partial.search.results` (default `false`) : Whether to return partial results in case of shard failure or fail the query throwing the underlying exception (default). -### Troubleshooting [_troubleshooting] +### Troubleshooting [_troubleshooting] `debug` (default `false`) : Setting it to `true` will enable the debug logging. @@ -190,7 +190,7 @@ $$$jdbc-cfg-timezone$$$ : The destination of the debug logs. By default, they are sent to standard error. Value `out` will redirect the logging to standard output. A file path can also be specified. -### Additional [_additional] +### Additional [_additional] `validate.properties` (default `true`) : If disabled, it will ignore any misspellings or unrecognizable properties. When enabled, an exception will be thrown if the provided property cannot be recognized. 
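The `subs=true` attribute added to fenced blocks above appears to be what enables variable substitution (such as `{{version}}`) inside a code fence; without it, the placeholder would render literally. A sketch of the pattern, reusing the installer command from this diff:

````markdown
```sh subs=true
start /wait msiexec.exe /i esodbc-{{version}}-windows-x86_64.msi /qn
```
````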
diff --git a/explore-analyze/query-filter/languages/sql-odbc-installation.md b/explore-analyze/query-filter/languages/sql-odbc-installation.md index e56773914..49a46f4dc 100644 --- a/explore-analyze/query-filter/languages/sql-odbc-installation.md +++ b/explore-analyze/query-filter/languages/sql-odbc-installation.md @@ -42,7 +42,7 @@ When installing the MSI, the Windows Defender SmartScreen might warn about runni Your driver must be compatible with your {{es}} version. ::::{important} -The driver version cannot be newer than the {{es}} version. For example, {{es}} version 7.10.0 is not compatible with 9.0.0-beta1 drivers. +The driver version cannot be newer than the {{es}} version. For example, {{es}} version 7.10.0 is not compatible with {{version}} drivers. :::: @@ -53,7 +53,7 @@ The driver version cannot be newer than the {{es}} version. For example, {{es}} ## Download the `.msi` package(s) [download] -Download the `.msi` package for Elasticsearch SQL ODBC Driver 9.0.0-beta1 from: [https://www.elastic.co/downloads/odbc-client](https://www.elastic.co/downloads/odbc-client) +Download the `.msi` package for Elasticsearch SQL ODBC Driver {{version}} from: [https://www.elastic.co/downloads/odbc-client](https://www.elastic.co/downloads/odbc-client) There are two versions of the installer available: @@ -82,7 +82,7 @@ Clicking **Next** will present the End User License Agreement. You will need to The following screen allows you to customise the installation path for the Elasticsearch ODBC driver files. ::::{note} -The default installation path is of the format: **%ProgramFiles%\Elastic\ODBCDriver\9.0.0-beta1** +The default installation path is of the format: **%ProgramFiles%\Elastic\ODBCDriver\\{{version}}** :::: @@ -127,20 +127,20 @@ The examples given below apply to installation of the 64 bit MSI package. To ach The `.msi` can also be installed via the command line. The simplest installation using the same defaults as the GUI is achieved by first navigating to the download directory, then running: -```sh -msiexec.exe /i esodbc-9.0.0-beta1-windows-x86_64.msi /qn +```sh subs=true +msiexec.exe /i esodbc-{{version}}-windows-x86_64.msi /qn ``` By default, `msiexec.exe` does not wait for the installation process to complete, since it runs in the Windows subsystem. To wait on the process to finish and ensure that `%ERRORLEVEL%` is set accordingly, it is recommended to use `start /wait` to create a process and wait for it to exit: -```sh -start /wait msiexec.exe /i esodbc-9.0.0-beta1-windows-x86_64.msi /qn +```sh subs=true +start /wait msiexec.exe /i esodbc-{{version}}-windows-x86_64.msi /qn ``` As with any MSI installation package, a log file for the installation process can be found within the `%TEMP%` directory, with a randomly generated name adhering to the format `MSI.LOG`. The path to a log file can be supplied using the `/l` command line argument -```sh -start /wait msiexec.exe /i esodbc-9.0.0-beta1-windows-x86_64.msi /qn /l install.log +```sh subs=true +start /wait msiexec.exe /i esodbc-{{version}}-windows-x86_64.msi /qn /l install.log ``` Supported Windows Installer command line arguments can be viewed using: @@ -156,12 +156,12 @@ msiexec.exe /help All settings exposed within the GUI are also available as command line arguments (referred to as *properties* within Windows Installer documentation) that can be passed to `msiexec.exe`: `INSTALLDIR` -: The installation directory. Defaults to `%ProgramFiles%\Elastic\ODBCDriver\9.0.0-beta1`. +: The installation directory. 
Defaults to _%ProgramFiles%\Elastic\ODBCDriver\\{{version}}_.

To pass a value, simply append the property name and value using the format `<PROPERTYNAME>="<VALUE>"` to the installation command. For example, to use a different installation directory to the default one:

-```sh
-start /wait msiexec.exe /i esodbc-9.0.0-beta1-windows-x86_64.msi /qn INSTALLDIR="c:\CustomDirectory"
+```sh subs=true
+start /wait msiexec.exe /i esodbc-{{version}}-windows-x86_64.msi /qn INSTALLDIR="c:\CustomDirectory"
 ```

Consult the [Windows Installer SDK Command-Line Options](https://msdn.microsoft.com/en-us/library/windows/desktop/aa367988(v=vs.85).aspx) for additional rules related to values containing quotation marks.

@@ -190,12 +190,12 @@ Once opened, find the Elasticsearch ODBC Driver installation within the list of

Uninstallation can also be performed from the command line by navigating to the directory containing the `.msi` package and running:

-```sh
-start /wait msiexec.exe /x esodbc-9.0.0-beta1-windows-x86_64.msi /qn
+```sh subs=true
+start /wait msiexec.exe /x esodbc-{{version}}-windows-x86_64.msi /qn
 ```

Similar to the install process, a path for a log file for the uninstallation process can be passed using the `/l` command line argument

-```sh
-start /wait msiexec.exe /x esodbc-9.0.0-beta1-windows-x86_64.msi /qn /l uninstall.log
+```sh subs=true
+start /wait msiexec.exe /x esodbc-{{version}}-windows-x86_64.msi /qn /l uninstall.log
 ```
diff --git a/explore-analyze/query-filter/languages/sql-syntax-select.md b/explore-analyze/query-filter/languages/sql-syntax-select.md
index cc82f352f..fa71d7f61 100644
--- a/explore-analyze/query-filter/languages/sql-syntax-select.md
+++ b/explore-analyze/query-filter/languages/sql-syntax-select.md
@@ -537,8 +537,9 @@ SELECT SCORE(), * FROM library WHERE MATCH(name, 'dune') ORDER BY page_count DES
 1.8893257 |Frank Herbert |Dune Messiah |331 |1969-10-15T00:00:00Z
 ```

-NOTE: Trying to return `score` from a non full-text query will return the same value for all results, as all are equally relevant.
-
+:::{note}
+Trying to return `score` from a non full-text query will return the same value for all results, as all are equally relevant.
+:::

 ## LIMIT [sql-syntax-limit]
diff --git a/explore-analyze/report-and-share.md b/explore-analyze/report-and-share.md
index b8475dcbb..3a1708e46 100644
--- a/explore-analyze/report-and-share.md
+++ b/explore-analyze/report-and-share.md
@@ -91,7 +91,7 @@ In the following dashboard, the shareable container is highlighted:

 * If you are creating workpad PDFs, select **Full page layout** to create PDFs without margins that surround the workpad.

-3. Generate the report by clicking **Export file***, ***Generate CSV***, or ***Generate PDF**, depending on the object you want to export.
+3. Generate the report by clicking **Export file**, **Generate CSV**, or **Generate PDF**, depending on the object you want to export.

 ::::{note}
 You can use the **Copy POST URL** option instead to generate the report from outside Kibana or from Watcher. 
diff --git a/explore-analyze/report-and-share/automating-report-generation.md b/explore-analyze/report-and-share/automating-report-generation.md
index 2b794c387..6b5a9f715 100644
--- a/explore-analyze/report-and-share/automating-report-generation.md
+++ b/explore-analyze/report-and-share/automating-report-generation.md
@@ -154,4 +154,6 @@ If you experience issues with the deprecated report URLs after you upgrade {{kib

 * **Visualize Library** reports: `/api/reporting/generate/visualization/`
 * **Discover** reports: `/api/reporting/generate/search/`

-IMPORTANT: In earlier {{kib}} versions, you could use the `&sync` parameter to append to report URLs that held the request open until the document was fully generated. The `&sync` parameter is now unsupported. If you use the `&sync` parameter in Watcher, you must update the parameter.
+:::{important}
+In earlier {{kib}} versions, you could append the `&sync` parameter to report URLs to hold the request open until the document was fully generated. The `&sync` parameter is now unsupported. If you use it in Watcher, you must update your watches to remove it.
+:::
diff --git a/explore-analyze/scripting/modules-scripting-fields.md b/explore-analyze/scripting/modules-scripting-fields.md
index 075fcb076..4d6d187e5 100644
--- a/explore-analyze/scripting/modules-scripting-fields.md
+++ b/explore-analyze/scripting/modules-scripting-fields.md
@@ -177,7 +177,7 @@ The `doc['field']` will throw an error if `field` is missing from the mappings.
 ::::

-::::{admonition} Doc values and `text` fields
+::::{admonition} Doc values and text fields
 :class: note
 The `doc['field']` syntax can also be used for [analyzed `text` fields](elasticsearch://reference/elasticsearch/mapping-reference/text.md) if [`fielddata`](elasticsearch://reference/elasticsearch/mapping-reference/text.md#fielddata-mapping-param) is enabled, but **BEWARE**: enabling fielddata on a `text` field requires loading all of the terms into the JVM heap, which can be very expensive both in terms of memory and CPU. It seldom makes sense to access `text` fields from scripts.

@@ -277,7 +277,7 @@ GET my-index-000001/_search
 }
 ```

-::::{admonition} Stored vs `_source`
+::::{admonition} Stored vs _source
 :class: tip
 The `_source` field is just a special stored field, so the performance is similar to that of other stored fields. The `_source` provides access to the original document body that was indexed (including the ability to distinguish `null` values from empty fields, single-value arrays from plain scalars, etc).
diff --git a/explore-analyze/transforms/transform-limitations.md b/explore-analyze/transforms/transform-limitations.md
index cb5ad43e8..b19726db9 100644
--- a/explore-analyze/transforms/transform-limitations.md
+++ b/explore-analyze/transforms/transform-limitations.md
@@ -9,7 +9,7 @@ mapped_pages:

 # Limitations [transform-limitations]

-The following limitations and known problems apply to the 9.0.0-beta1 release of the Elastic {{transform}} feature. The limitations are grouped into the following categories:
+The following limitations and known problems apply to the {{version}} release of the Elastic {{transform}} feature. The limitations are grouped into the following categories:

 * [Configuration limitations](#transform-config-limitations) apply to the configuration process of the {{transforms}}.
 * [Operational limitations](#transform-operational-limitations) affect the behavior of the {{transforms}} that are running. 
diff --git a/explore-analyze/visualize/canvas/canvas-function-reference.md b/explore-analyze/visualize/canvas/canvas-function-reference.md
index bc643ad7f..03afafa1e 100644
--- a/explore-analyze/visualize/canvas/canvas-function-reference.md
+++ b/explore-analyze/visualize/canvas/canvas-function-reference.md
@@ -12,7 +12,7 @@ Behind the scenes, Canvas is driven by a powerful expression language, with doze

 The Canvas expression language also supports [TinyMath functions](canvas-tinymath-functions.md), which perform complex math calculations.

-A ***** denotes a required argument.
+A **\*** denotes a required argument.

-A † denotes an argument can be passed multiple times.
+A † denotes an argument that can be passed multiple times.

@@ -58,7 +58,7 @@ This sets the color of the metric text to `"red"` if the context passed into `me

| Argument | Type | Description |
| --- | --- | --- |
-| *Unnamed* ***** †
Alias: `condition` | `boolean` | The conditions to check. | +| *Unnamed* **\*** †
Alias: `condition` | `boolean` | The conditions to check. | **Returns:** `boolean` @@ -91,7 +91,7 @@ This renames the `time` column to `time_in_ms` and converts the type of the colu | Argument | Type | Description | | --- | --- | --- | -| *Unnamed* *****
Alias: `column` | `string` | The name of the column to alter. | +| *Unnamed* **\***
Alias: `column` | `string` | The name of the column to alter. | | `name` | `string` | The resultant column name. Leave blank to not rename. | | `type` | `string` | The type to convert the column to. Leave blank to not change the type. | @@ -129,7 +129,7 @@ This filters out any rows that don’t contain `"elasticsearch"`, `"kibana"` or | Argument | Type | Description | | --- | --- | --- | -| *Unnamed* ***** †
Alias: `condition` | `boolean` | The conditions to check. | +| *Unnamed* **\*** †
Alias: `condition` | `boolean` | The conditions to check. | **Returns:** `boolean` @@ -195,7 +195,7 @@ The image asset stored with the ID `"asset-c661a7cc-11be-45a1-a401-d7592ea7917a" | Argument | Type | Description | | --- | --- | --- | -| *Unnamed* *****
Alias: `id` | `string` | The ID of the asset to retrieve. | +| *Unnamed* **\***
Alias: `id` | `string` | The ID of the asset to retrieve. | **Returns:** `string` @@ -284,7 +284,7 @@ This sets the color of the progress indicator and the color of the label to `"gr | --- | --- | --- | | *Unnamed*
Alias: `when` | `any` | The value compared to the *context* to see if they are equal. The `when` argument is ignored when the `if` argument is also specified. | | `if` | `boolean` | This value indicates whether the condition is met. The `if` argument overrides the `when` argument when both are provided. | -| `then` ***** | `any` | The value returned if the condition is met. | +| `then` **\*** | `any` | The value returned if the condition is met. | **Returns:** `case` @@ -546,7 +546,7 @@ This creates a `datatable` with `fruit` and `stock` columns with two rows. This | Argument | Type | Description | | --- | --- | --- | -| *Unnamed* *****
Alias: `data` | `string` | The CSV data to use. | +| *Unnamed* **\***
Alias: `data` | `string` | The CSV data to use. | | `delimiter` | `string` | The data separation character. | | `newline` | `string` | The row separation character. | @@ -666,10 +666,10 @@ This creates a dropdown filter element. It requires a data source and uses the u | Argument | Type | Description | | --- | --- | --- | -| `filterColumn` ***** | `string` | The column or field that you want to filter. | +| `filterColumn` **\*** | `string` | The column or field that you want to filter. | | `filterGroup` | `string` | The group name for the filter. | | `labelColumn` | `string` | The column or field to use as the label in the dropdown control | -| `valueColumn` ***** | `string` | The column or field from which to extract the unique values for the dropdown control. | +| `valueColumn` **\*** | `string` | The column or field from which to extract the unique values for the dropdown control. | **Returns:** `render` @@ -685,8 +685,8 @@ Returns an embeddable with the provided configuration | Argument | Type | Description | | --- | --- | --- | -| *Unnamed* *****
Alias: `config` | `string` | The base64 encoded embeddable input object | -| `type` ***** | `string` | The embeddable type | +| *Unnamed* **\***
Alias: `config` | `string` | The base64 encoded embeddable input object | +| `type` **\*** | `string` | The embeddable type | **Returns:** `embeddable` @@ -728,7 +728,7 @@ This changes all values in the project column that don’t equal `"kibana"` or ` | Argument | Type | Description | | --- | --- | --- | -| *Unnamed* *****
Alias: `value` | `boolean`, `number`, `string`, `null` | The value compared to the *context*. | +| *Unnamed* **\***
Alias: `value` | `boolean`, `number`, `string`, `null` | The value compared to the *context*. | **Returns:** `boolean` @@ -871,9 +871,9 @@ exactly column="project" value="beats" | Argument | Type | Description | | --- | --- | --- | -| `column` *****
Aliases: `c`, `field` | `string` | The column or field that you want to filter. | +| `column` **\***
Aliases: `c`, `field` | `string` | The column or field that you want to filter. | | `filterGroup` | `string` | The group name for the filter. | -| `value` *****
Aliases: `v`, `val` | `string` | The value to match exactly, including white space and capitalization. | +| `value` **\***
Aliases: `v`, `val` | `string` | The value to match exactly, including white space and capitalization. | **Returns:** `filter` @@ -914,7 +914,7 @@ This uses `filterrows` to only keep data from India (`IN`), the United States (` | Argument | Type | Description | | --- | --- | --- | -| *Unnamed* *****
Aliases: `exp`, `expression`, `fn`, `function` | `boolean` | An expression to pass into each row in the `datatable`. The expression should return a `boolean`. A `true` value preserves the row, and a `false` value removes it. | +| *Unnamed* **\***
Aliases: `exp`, `expression`, `fn`, `function` | `boolean` | An expression to pass into each row in the `datatable`. The expression should return a `boolean`. A `true` value preserves the row, and a `false` value removes it. | **Returns:** `datatable` @@ -1045,7 +1045,7 @@ This transforms the dates in the `time` field into strings that look like `"Jan | Argument | Type | Description | | --- | --- | --- | -| *Unnamed* *****
Alias: `format` | `string` | A MomentJS format. For example, `"MM/DD/YYYY"`. See [https://momentjs.com/docs/#/displaying/](https://momentjs.com/docs/#/displaying/). | +| *Unnamed* **\***
Alias: `format` | `string` | A MomentJS format. For example, `"MM/DD/YYYY"`. See [https://momentjs.com/docs/#/displaying/](https://momentjs.com/docs/#/displaying/). | **Returns:** `string` @@ -1080,7 +1080,7 @@ The `formatnumber` subexpression receives the same `context` as the `progress` f | Argument | Type | Description | | --- | --- | --- | -| *Unnamed* *****
Alias: `format` | `string` | A Numeral pattern format string. For example, `"0.0a"` or `"0%"`. | +| *Unnamed* **\***
Alias: `format` | `string` | A Numeral pattern format string. For example, `"0.0a"` or `"0%"`. | **Returns:** `string` @@ -1110,7 +1110,7 @@ Returns whether the *context* is greater than the argument. | Argument | Type | Description | | --- | --- | --- | -| *Unnamed* *****
Alias: `value` | `number`, `string` | The value compared to the *context*. | +| *Unnamed* **\***
Alias: `value` | `number`, `string` | The value compared to the *context*. | **Returns:** `boolean` @@ -1123,7 +1123,7 @@ Returns whether the *context* is greater or equal to the argument. | Argument | Type | Description | | --- | --- | --- | -| *Unnamed* *****
Alias: `value` | `number`, `string` | The value compared to the *context*. | +| *Unnamed* **\***
Alias: `value` | `number`, `string` | The value compared to the *context*. | **Returns:** `boolean` @@ -1155,7 +1155,7 @@ Performs conditional logic. | Argument | Type | Description | | --- | --- | --- | -| *Unnamed* *****
Alias: `condition` | `boolean` | A `true` or `false` indicating whether a condition is met, usually returned by a sub-expression. When unspecified, the original *context* is returned. | +| *Unnamed* **\***
Alias: `condition` | `boolean` | A `true` or `false` indicating whether a condition is met, usually returned by a sub-expression. When unspecified, the original *context* is returned. | | `else` | `any` | The return value when the condition is `false`. When unspecified and the condition is not met, the original *context* is returned. | | `then` | `any` | The return value when the condition is `true`. When unspecified and the condition is met, the original *context* is returned. | @@ -1187,7 +1187,7 @@ Concatenates values from rows in a `datatable` into a single string. | Argument | Type | Description | | --- | --- | --- | -| *Unnamed* *****
Alias: `column` | `string` | The column or field from which to extract the values. | +| *Unnamed* **\***
Alias: `column` | `string` | The column or field from which to extract the values. | | `distinct` | `boolean` | Extract only unique values?
Default: `true` | | `quote` | `string` | The quote character to wrap around each extracted value.
Default: `"'"` | | `separator`
Aliases: `delimiter`, `sep` | `string` | The delimiter to insert between each extracted value.
Default: `","` | @@ -1227,7 +1227,7 @@ Returns whether the *context* is less than the argument. | Argument | Type | Description | | --- | --- | --- | -| *Unnamed* *****
Alias: `value` | `number`, `string` | The value compared to the *context*. | +| *Unnamed* **\***
Alias: `value` | `number`, `string` | The value compared to the *context*. | **Returns:** `boolean` @@ -1240,7 +1240,7 @@ Returns whether the *context* is less than or equal to the argument. | Argument | Type | Description | | --- | --- | --- | -| *Unnamed* *****
Alias: `value` | `number`, `string` | The value compared to the *context*. | +| *Unnamed* **\***
Alias: `value` | `number`, `string` | The value compared to the *context*. | **Returns:** `boolean` @@ -1256,9 +1256,9 @@ Returns an object with the center coordinates and zoom level of the map. | Argument | Type | Description | | --- | --- | --- | -| `lat` ***** | `number` | Latitude for the center of the map | -| `lon` ***** | `number` | Longitude for the center of the map | -| `zoom` ***** | `number` | Zoom level of the map | +| `lat` **\*** | `number` | Latitude for the center of the map | +| `lon` **\*** | `number` | Longitude for the center of the map | +| `zoom` **\*** | `number` | Zoom level of the map | **Returns:** `mapCenter` @@ -1271,9 +1271,9 @@ Adds a column calculated as the result of other columns. Changes are made only w | Argument | Type | Description | | --- | --- | --- | -| *Unnamed* *****
Aliases: `column`, `name` | `string` | The name of the resulting column. Names are not required to be unique. | +| *Unnamed* **\***
Aliases: `column`, `name` | `string` | The name of the resulting column. Names are not required to be unique. | | `copyMetaFrom` | `string`, `null` | If set, the meta object from the specified column id is copied over to the specified target column. If the column doesn’t exist it silently fails.
Default: `null` | -| `expression` *****
Aliases: `exp`, `fn`, `function` | `boolean`, `number`, `string`, `null` | An expression that is executed on every row, provided with a single-row `datatable` context and returning the cell value. | +| `expression` **\***
Aliases: `exp`, `fn`, `function` | `boolean`, `number`, `string`, `null` | An expression that is executed on every row, provided with a single-row `datatable` context and returning the cell value. | | `id` | `string`, `null` | An optional id of the resulting column. When no id is provided, the id will be looked up from the existing column by the provided name argument. If no column with this name exists yet, a new column with this name and an identical id will be added to the table.
Default: `null` | **Returns:** `datatable` @@ -1281,7 +1281,11 @@ Adds a column calculated as the result of other columns. Changes are made only w ### `markdown` [markdown_fn] -Adds an element that renders Markdown text. TIP: Use the [`markdown`](#markdown_fn) function for single numbers, metrics, and paragraphs of text. +Adds an element that renders Markdown text. + +:::{tip} +Use the [`markdown`](#markdown_fn) function for single numbers, metrics, and paragraphs of text. +::: **Accepts:** `datatable`, `null` @@ -1316,11 +1320,11 @@ Adds a column by evaluating `TinyMath` on each row. This function is optimized f | Argument | Type | Description | | --- | --- | --- | -| *Unnamed* *****
Aliases: `column`, `name` | `string` | The name of the resulting column. Names are not required to be unique. | +| *Unnamed* **\***
Aliases: `column`, `name` | `string` | The name of the resulting column. Names are not required to be unique. | | *Unnamed*
Alias: `expression` | `string` | An evaluated `TinyMath` expression. See [canvas-tinymath-functions.md](/reference/data-analysis/kibana/tinymath-functions.md). | | `castColumns` † | `string` | The column ids that are cast to numbers before the formula is applied. | | `copyMetaFrom` | `string`, `null` | If set, the meta object from the specified column id is copied over to the specified target column. If the column doesn’t exist it silently fails.
Default: `null` | -| `id` ***** | `string` | id of the resulting column. Must be unique. | +| `id` **\*** | `string` | id of the resulting column. Must be unique. | | `onError` | `string` | In case the `TinyMath` evaluation fails or returns NaN, the return value is specified by onError. When `'throw'`, it will throw an exception, terminating expression execution (default). | **Returns:** `datatable` @@ -1353,7 +1357,7 @@ Returns whether the *context* is not equal to the argument. | Argument | Type | Description | | --- | --- | --- | -| *Unnamed* *****
Alias: `value` | `boolean`, `number`, `string`, `null` | The value compared to the *context*. | +| *Unnamed* **\***
Alias: `value` | `boolean`, `number`, `string`, `null` | The value compared to the *context*. | **Returns:** `boolean` @@ -1650,7 +1654,7 @@ Adds a column with the same static value in every row. See also [`alterColumn`]( | Argument | Type | Description | | --- | --- | --- | -| *Unnamed* *****
Aliases: `column`, `name` | `string` | The name of the new column. | +| *Unnamed* **\***
Aliases: `column`, `name` | `string` | The name of the new column. | | `value` | `string`, `number`, `boolean`, `null` | The value to insert in each row in the new column. TIP: use a sub-expression to rollup other columns into a static value.
Default: `null` | **Returns:** `datatable` @@ -1677,7 +1681,7 @@ Performs conditional logic with multiple conditions. See also [`case`](#case_fn) | Argument | Type | Description | | --- | --- | --- | -| *Unnamed* ***** †
Alias: `case` | `case` | The conditions to check. | +| *Unnamed* **\*** †
Alias: `case` | `case` | The conditions to check. | | `default`
Alias: `finally` | `any` | The value returned when no conditions are met. When unspecified and no conditions are met, the original *context* is returned. | **Returns:** Depends on your input and arguments @@ -1771,8 +1775,8 @@ An object that represents a span of time. | Argument | Type | Description | | --- | --- | --- | -| `from` ***** | `string` | The start of the time range | -| `to` ***** | `string` | The end of the time range | +| `from` **\*** | `string` | The start of the time range | +| `to` **\*** | `string` | The end of the time range | **Returns:** `timerange` @@ -1801,7 +1805,7 @@ Returns a UI settings parameter value. | Argument | Type | Description | | --- | --- | --- | -| *Unnamed* *****
Alias: `parameter` | `string` | The parameter name. | +| *Unnamed* **\***
Alias: `parameter` | `string` | The parameter name. | | `default` | `any` | A default value in case of the parameter is not set. | **Returns:** Depends on your input and arguments @@ -1815,7 +1819,7 @@ Retrieves a URL parameter to use in an expression. The [`urlparam`](#urlparam_fn | Argument | Type | Description | | --- | --- | --- | -| *Unnamed* *****
Aliases: `param`, `var`, `variable` | `string` | The URL hash parameter to retrieve. | +| *Unnamed* **\***
Aliases: `param`, `var`, `variable` | `string` | The URL hash parameter to retrieve. | | `default` | `string` | The string returned when the URL parameter is unspecified.
Default: `""` | **Returns:** `string` @@ -1832,7 +1836,7 @@ Updates the Kibana global context. | Argument | Type | Description | | --- | --- | --- | -| *Unnamed* *****
Alias: `name` | `string` | Specify the name of the variable. | +| *Unnamed* **\***
Alias: `name` | `string` | Specify the name of the variable. | **Returns:** Depends on your input and arguments @@ -1845,7 +1849,7 @@ Updates the Kibana global context. | Argument | Type | Description | | --- | --- | --- | -| *Unnamed* ***** †
Alias: `name` | `string` | Specify the name of the variable. | +| *Unnamed* **\*** †
Alias: `name` | `string` | Specify the name of the variable. | | `value` †
Alias: `val` | `any` | Specify the value for the variable. When unspecified, the input context is used. | **Returns:** Depends on your input and arguments diff --git a/explore-analyze/visualize/maps/maps-connect-to-ems.md b/explore-analyze/visualize/maps/maps-connect-to-ems.md index af6988e0f..76a2acbca 100644 --- a/explore-analyze/visualize/maps/maps-connect-to-ems.md +++ b/explore-analyze/visualize/maps/maps-connect-to-ems.md @@ -41,8 +41,8 @@ Headers for the Tile Service JSON manifest describing the basemaps available. ::::::{tab-item} Curl Example ::::{dropdown} -```bash -curl -I 'https://tiles.maps.elastic.co/v9.0/manifest?elastic_tile_service_tos=agree&my_app_name=kibana&my_app_version=9.0.0-beta1' \ +```bash subs=true +curl -I 'https://tiles.maps.elastic.co/v9.0/manifest?elastic_tile_service_tos=agree&my_app_name=kibana&my_app_version={{version}}' \ -H 'User-Agent: curl/7.81.0' \ -H 'Accept: */*' \ -H 'Accept-Encoding: gzip, deflate, br' @@ -121,8 +121,8 @@ Headers for a vector tile asset in *protobuffer* format from the Tile Service. ::::::{tab-item} Curl Example ::::{dropdown} -```bash -$ curl -I 'https://tiles.maps.elastic.co/data/v3/1/1/0.pbf?elastic_tile_service_tos=agree&my_app_name=kibana&my_app_version=9.0.0-beta1' \ +```bash subs=true +$ curl -I 'https://tiles.maps.elastic.co/data/v3/1/1/0.pbf?elastic_tile_service_tos=agree&my_app_name=kibana&my_app_version={{version}}' \ -H 'User-Agent: curl/7.81.0' \ -H 'Accept: */*' \ -H 'Accept-Encoding: gzip, deflate, br' @@ -284,8 +284,8 @@ Headers for the File Service JSON manifest that declares all the datasets availa ::::::{tab-item} Curl example ::::{dropdown} -```bash -curl -I 'https://vector.maps.elastic.co/v9.0/manifest?elastic_tile_service_tos=agree&my_app_name=kibana&my_app_version=9.0.0-beta1' \ +```bash subs=true +curl -I 'https://vector.maps.elastic.co/v9.0/manifest?elastic_tile_service_tos=agree&my_app_name=kibana&my_app_version={{version}}' \ -H 'User-Agent: curl/7.81.0' \ -H 'Accept: */*' \ -H 'Accept-Encoding: gzip, deflate, br' @@ -375,8 +375,8 @@ Headers for a sample Dataset from the File Service in TopoJSON format. ::::::{tab-item} Curl example ::::{dropdown} -```bash -curl -I 'https://vector.maps.elastic.co/files/world_countries_v7.topo.json?elastic_tile_service_tos=agree&my_app_name=kibana&my_app_version=9.0.0-beta1' \ +```bash subs=true +curl -I 'https://vector.maps.elastic.co/files/world_countries_v7.topo.json?elastic_tile_service_tos=agree&my_app_name=kibana&my_app_version={{version}}' \ -H 'User-Agent: curl/7.81.0' \ -H 'Accept: */*' \ -H 'Accept-Encoding: gzip, deflate, br' @@ -484,26 +484,21 @@ If you cannot connect to Elastic Maps Service from the {{kib}} server or browser 1. Pull the {{hosted-ems}} Docker image. - ::::{warning} - Version 9.0.0-beta1 of {{hosted-ems}} has not yet been released. No Docker image is currently available for this version. - :::: - - - ```bash - docker pull docker.elastic.co/elastic-maps-service/elastic-maps-server:9.0.0-beta1 + ```bash subs=true + docker pull docker.elastic.co/elastic-maps-service/elastic-maps-server:{{version}} ``` 2. Optional: Install [Cosign](https://docs.sigstore.dev/system_config/installation/) for your environment. Then use Cosign to verify the {{es}} image’s signature. 
- ```sh + ```sh subs=true wget https://artifacts.elastic.co/cosign.pub - cosign verify --key cosign.pub docker.elastic.co/elastic-maps-service/elastic-maps-server:9.0.0-beta1 + cosign verify --key cosign.pub docker.elastic.co/elastic-maps-service/elastic-maps-server:{{version}} ``` The `cosign` command prints the check results and the signature payload in JSON format: - ```sh - Verification for docker.elastic.co/elastic-maps-service/elastic-maps-server:9.0.0-beta1 -- + ```sh subs=true + Verification for docker.elastic.co/elastic-maps-service/elastic-maps-server:{{version}} -- The following checks were performed on each of these signatures: - The cosign claims were validated - Existence of the claims in the transparency log was verified offline @@ -512,9 +507,9 @@ If you cannot connect to Elastic Maps Service from the {{kib}} server or browser 3. Start {{hosted-ems}} and expose the default port `8080`: - ```bash + ```bash subs=true docker run --rm --init --publish 8080:8080 \ - docker.elastic.co/elastic-maps-service/elastic-maps-server:9.0.0-beta1 + docker.elastic.co/elastic-maps-service/elastic-maps-server:{{version}} ``` Once {{hosted-ems}} is running, follow instructions from the webpage at `localhost:8080` to define a configuration file and optionally download a more detailed basemaps database. @@ -534,7 +529,7 @@ If you cannot connect to Elastic Maps Service from the {{kib}} server or browser | | | | --- | --- | -| $$$ems-host$$$`host` | Specifies the host of the backend server. To allow remote users to connect, set the value to the IP address or DNS name of the {{hosted-ems}} container. **Default: *your-hostname***. [Equivalent {{kib}} setting](kibana://reference/configuration-reference/general-settings.md#server-host). | +| $$$ems-host$$$`host` | Specifies the host of the backend server. To allow remote users to connect, set the value to the IP address or DNS name of the {{hosted-ems}} container. **Default: _your-hostname_**. [Equivalent {{kib}} setting](kibana://reference/configuration-reference/general-settings.md#server-host). | | `port` | Specifies the port used by the backend server. Default: **`8080`**. [Equivalent {{kib}} setting](kibana://reference/configuration-reference/general-settings.md#server-port). | | `basePath` | Specify a path at which to mount the server if you are running behind a proxy. This setting cannot end in a slash (`/`). [Equivalent {{kib}} setting](kibana://reference/configuration-reference/general-settings.md#server-basepath). | | `ui` | Controls the display of the status page and the layer preview. **Default: `true`** | @@ -566,10 +561,10 @@ If you cannot connect to Elastic Maps Service from the {{kib}} server or browser One way to configure {{hosted-ems}} is to provide `elastic-maps-server.yml` via bind-mounting. 
With `docker-compose`, the bind-mount can be specified like this: -```yaml +```yaml subs=true services: ems-server: - image: docker.elastic.co/elastic-maps-service/elastic-maps-server:9.0.0-beta1 + image: docker.elastic.co/elastic-maps-service/elastic-maps-server:{{version}} volumes: - ./elastic-maps-server.yml:/usr/src/app/server/config/elastic-maps-server.yml ``` @@ -586,10 +581,10 @@ All information that you include in environment variables is visible through the These variables can be set with `docker-compose` like this: -```yaml +```yaml subs=true services: ems-server: - image: docker.elastic.co/elastic-maps-service/elastic-maps-server:9.0.0-beta1 + image: docker.elastic.co/elastic-maps-service/elastic-maps-server:{{version}} environment: ELASTICSEARCH_HOST: http://elasticsearch.example.org ELASTICSEARCH_USERNAME: 'ems' diff --git a/manage-data/data-store/data-streams/time-series-data-stream-tsds.md b/manage-data/data-store/data-streams/time-series-data-stream-tsds.md index 00db340ef..526dabc8b 100644 --- a/manage-data/data-store/data-streams/time-series-data-stream-tsds.md +++ b/manage-data/data-store/data-streams/time-series-data-stream-tsds.md @@ -93,7 +93,7 @@ To mark a field as a metric, you must specify a metric type using the `time_seri Accepted metric types vary based on the field type: -:::::{dropdown} Valid values for `time_series_metric` +:::::{dropdown} Valid values for time_series_metric `counter` : A cumulative metric that only monotonically increases or resets to `0` (zero). For example, a count of errors or completed tasks. diff --git a/manage-data/data-store/mapping/dynamic-field-mapping.md b/manage-data/data-store/mapping/dynamic-field-mapping.md index 19fc993c3..1c7b75236 100644 --- a/manage-data/data-store/mapping/dynamic-field-mapping.md +++ b/manage-data/data-store/mapping/dynamic-field-mapping.md @@ -19,10 +19,8 @@ The field data types in the following table are the only [field data types](elas $$$dynamic-field-mapping-types$$$ -| | | -| --- | --- | -| | {{es}} data type | -| JSON data type | `"dynamic":"true"` | `"dynamic":"runtime"` | +| JSON data type | {{es}} data type
(`"dynamic":"true"`) | {{es}} data type
(`"dynamic":"runtime"`) | +| --- | --- | --- | | `null` | No field added | No field added | | `true` or `false` | `boolean` | `boolean` | | `double` | `float` | `double` | diff --git a/manage-data/ingest/ingesting-data-from-applications/ingest-data-from-beats-to-elasticsearch-service-with-logstash-as-proxy.md b/manage-data/ingest/ingesting-data-from-applications/ingest-data-from-beats-to-elasticsearch-service-with-logstash-as-proxy.md index 8eaae3f99..a80994ea7 100644 --- a/manage-data/ingest/ingesting-data-from-applications/ingest-data-from-beats-to-elasticsearch-service-with-logstash-as-proxy.md +++ b/manage-data/ingest/ingesting-data-from-applications/ingest-data-from-beats-to-elasticsearch-service-with-logstash-as-proxy.md @@ -77,7 +77,7 @@ Metricbeat has [many modules](beats://reference/metricbeat/metricbeat-modules.md **Load the Metricbeat Kibana dashboards** -Metricbeat comes packaged with example dashboards, visualizations, and searches for visualizing Metricbeat data in Kibana. Before you can use the dashboards, you need to create the data view (formerly *index pattern*) *metricbeat-**, and load the dashboards into Kibana. This needs to be done from a local Beats machine that has access to the {{esh}} or {{ece}} deployment. +Metricbeat comes packaged with example dashboards, visualizations, and searches for visualizing Metricbeat data in Kibana. Before you can use the dashboards, you need to create the data view (formerly *index pattern*) _metricbeat-*_, and load the dashboards into Kibana. This needs to be done from a local Beats machine that has access to the {{esh}} or {{ece}} deployment. ::::{note} Beginning with Elastic Stack version 8.0, Kibana *index patterns* have been renamed to *data views*. To learn more, check the Kibana [What’s new in 8.0](https://www.elastic.co/guide/en/kibana/8.0/whats-new.html#index-pattern-rename) page. @@ -172,7 +172,7 @@ The system module is now enabled in Filebeat and it will be used the next time F **Load the Filebeat Kibana dashboards** -Filebeat comes packaged with example Kibana dashboards, visualizations, and searches for visualizing Filebeat data in Kibana. Before you can use the dashboards, you need to create the data view *filebeat-**, and load the dashboards into Kibana. This needs to be done from a Beats machine that has access to the Internet. +Filebeat comes packaged with example Kibana dashboards, visualizations, and searches for visualizing Filebeat data in Kibana. Before you can use the dashboards, you need to create the data view _filebeat-*_, and load the dashboards into Kibana. This needs to be done from a Beats machine that has access to the Internet. 1. Open a command line instance and then go to */filebeat-/* 2. Run the following command: @@ -207,7 +207,7 @@ Loaded Ingest pipelines 1. Exit the CLI. -The data views for *filebeat-** and *metricbeat-** are now available in {{es}}. To verify: +The data views for _filebeat-*_ and _metricbeat-*_ are now available in {{es}}. To verify: 1. [Login to Kibana](../../../deploy-manage/deploy/elastic-cloud/access-kibana.md). 2. Open the Kibana main menu and select **Management** and go to **Kibana** > **Data views**. 
diff --git a/manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-nodejs-web-application-using-filebeat.md b/manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-nodejs-web-application-using-filebeat.md index f704a1b7f..42325a41c 100644 --- a/manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-nodejs-web-application-using-filebeat.md +++ b/manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-nodejs-web-application-using-filebeat.md @@ -289,7 +289,7 @@ filebeat.inputs: ``` ::::{tip} -You can specify a wildcard (***) character to indicate that all log files in the specified directory should be read. You can also use a wildcard to read logs from multiple directories. For example `/var/log/*/*.log`. +You can specify a wildcard (*\**) character to indicate that all log files in the specified directory should be read. You can also use a wildcard to read logs from multiple directories. For example `/var/log/*/*.log`. :::: @@ -356,7 +356,7 @@ The Filebeat data view is now available in Elasticsearch. To verify: 1. [Login to Kibana](../../../deploy-manage/deploy/elastic-cloud/access-kibana.md). 2. Open the {{kib}} main menu and select **Management** > **{{kib}}** > **Data views**. -3. In the search bar, search for *filebeat*. You should get *filebeat-** in the search results. +3. In the search bar, search for *filebeat*. You should get _filebeat-*_ in the search results. **Optional: Use an API key to authenticate** @@ -467,8 +467,8 @@ The next step is to confirm that the log data has successfully found it’s way 1. [Login to Kibana](../../../deploy-manage/deploy/elastic-cloud/access-kibana.md). 2. Open the {{kib}} main menu and select **Management** > **{{kib}}** > **Data views**. -3. In the search bar, search for *filebeat*. You should get *filebeat-** in the search results. -4. Select *filebeat-**. +3. In the search bar, search for *filebeat*. You should get _filebeat-*_ in the search results. +4. Select _filebeat-*_. The filebeat data view shows a list of fields and their details. @@ -479,7 +479,7 @@ Now it’s time to create visualizations based off of the application log data. 1. Open the Kibana main menu and select **Dashboard**, then **Create dashboard**. 2. Select **Create visualization**. The [Lens](../../../explore-analyze/visualize/lens.md) visualization editor opens. -3. In the data view dropdown box, select **filebeat-***, if it isn’t already selected. +3. In the data view dropdown box, select **filebeat-\***, if it isn’t already selected. 4. In the **CHART TYPE** dropdown box, select **Bar vertical stacked**, if it isn’t already selected. 5. Check that the [time filter](../../../explore-analyze/query-filter/filtering.md) is set to **Last 15 minutes**. 6. From the **Available fields** list, drag and drop the **@timestamp** field onto the visualization builder. diff --git a/manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-python-application-using-filebeat.md b/manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-python-application-using-filebeat.md index 3d7e36701..a2f7509d3 100644 --- a/manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-python-application-using-filebeat.md +++ b/manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-python-application-using-filebeat.md @@ -186,7 +186,7 @@ filebeat.inputs: - /path/to/log/files/*.json ``` -You can specify a wildcard (***) character to indicate that all log files in the specified directory should be read. 
You can also use a wildcard to read logs from multiple directories. For example `/var/log/*/*.log`.
+You can specify a wildcard (*\**) character to indicate that all log files in the specified directory should be read. You can also use a wildcard to read logs from multiple directories. For example `/var/log/*/*.log`.

**Add the JSON input options**

@@ -256,7 +256,7 @@ Beginning with Elastic Stack version 8.0, Kibana *index patterns* have been rena

 1. [Login to Kibana](../../../deploy-manage/deploy/elastic-cloud/access-kibana.md).
 2. Open the {{kib}} main menu and select **Management** > **{{kib}}** > **Data views**.
-3. In the search bar, search for *filebeat*. You should get *filebeat-** in the search results.
+3. In the search bar, search for *filebeat*. You should get _filebeat-*_ in the search results.

**Optional: Use an API key to authenticate**

@@ -361,8 +361,8 @@ The next step is to confirm that the log data has successfully found it’s way

 1. [Login to Kibana](../../../deploy-manage/deploy/elastic-cloud/access-kibana.md).
 2. Open the {{kib}} main menu and select **Management** > **{{kib}}** > **Data views**.
-3. In the search bar, search for *filebeat_. You should get *filebeat-** in the search results.
-4. Select *filebeat-**.
+3. In the search bar, search for *filebeat*. You should get _filebeat-*_ in the search results.
+4. Select _filebeat-*_.

The filebeat data view shows a list of fields and their details.

@@ -373,7 +373,7 @@ Now it’s time to create visualizations based off of the Python application log

 1. Open the Kibana main menu and select **Dashboard**, then **Create dashboard**.
 2. Select **Create visualization**. The [Lens](../../../explore-analyze/visualize/lens.md) visualization editor opens.
-3. In the data view dropdown box, select **filebeat-**, if it isn’t already selected.
+3. In the data view dropdown box, select **filebeat-\***, if it isn’t already selected.
 4. In the **Visualization type dropdown**, select **Bar vertical stacked**, if it isn’t already selected.
 5. Check that the [time filter](../../../explore-analyze/query-filter/filtering.md) is set to **Last 15 minutes**.
 6. From the **Available fields** list, drag and drop the **@timestamp** field onto the visualization builder.
diff --git a/manage-data/ingest/transform-enrich/ingest-pipelines-serverless.md b/manage-data/ingest/transform-enrich/ingest-pipelines-serverless.md
index a77079500..78eef6469 100644
--- a/manage-data/ingest/transform-enrich/ingest-pipelines-serverless.md
+++ b/manage-data/ingest/transform-enrich/ingest-pipelines-serverless.md
@@ -38,7 +38,7 @@ The **New pipeline from CSV** option lets you use a file with comma-separated va

 ## Test pipelines [ingest-pipelines-test-pipelines]

-Before you use a pipeline in production, you should test it using sample documents. When creating or editing a pipeline in **{{ingest-pipelines-app}}**, click **Add documents***. In the ***Documents** tab, provide sample documents and click **Run the pipeline**:
+Before you use a pipeline in production, you should test it using sample documents. When creating or editing a pipeline in **{{ingest-pipelines-app}}**, click **Add documents**. 
In the **Documents** tab, provide sample documents and click **Run the pipeline**: :::{image} /manage-data/images/serverless-ingest-pipelines-test.png :alt: Test a pipeline in {{ingest-pipelines-app}} diff --git a/reference/apm/cloud/apm-settings.md b/reference/apm/cloud/apm-settings.md index b957e5af3..80dfc1631 100644 --- a/reference/apm/cloud/apm-settings.md +++ b/reference/apm/cloud/apm-settings.md @@ -357,7 +357,7 @@ Allow anonymous access only for specified agents and/or services. This is primar : Specifies the minimum log level. One of *debug*, *info*, *warning*, or *error*. Defaults to *info*. `logging.selectors` -: The list of debugging-only selector tags used by different APM Server components. Use *** to enable debug output for all components. For example, add *publish* to display all the debug messages related to event publishing. +: The list of debugging-only selector tags used by different APM Server components. Use *\** to enable debug output for all components. For example, add *publish* to display all the debug messages related to event publishing. `logging.metrics.enabled` : If enabled, APM Server periodically logs its internal metrics that have changed in the last period. Defaults to *true*. diff --git a/reference/apm/observability/apm.md b/reference/apm/observability/apm.md index c6a9d422c..c109f8c82 100644 --- a/reference/apm/observability/apm.md +++ b/reference/apm/observability/apm.md @@ -8,7 +8,7 @@ mapped_pages: Elastic APM is an application performance monitoring system built on the {{stack}}. It allows you to monitor software services and applications in real time, by collecting detailed performance information on response time for incoming requests, database queries, calls to caches, external HTTP requests, and more. This makes it easy to pinpoint and fix performance problems quickly. :::{image} /reference/apm/images/observability-apm-app-landing.png -:alt: Applications UI in {kib} +:alt: Applications UI in {{kib}} :screenshot: ::: diff --git a/reference/data-analysis/kibana/canvas-functions.md b/reference/data-analysis/kibana/canvas-functions.md index 1746624c3..ea358568f 100644 --- a/reference/data-analysis/kibana/canvas-functions.md +++ b/reference/data-analysis/kibana/canvas-functions.md @@ -1278,7 +1278,11 @@ Adds a column calculated as the result of other columns. Changes are made only w ### `markdown` [markdown_fn] -Adds an element that renders Markdown text. TIP: Use the [`markdown`](#markdown_fn) function for single numbers, metrics, and paragraphs of text. +Adds an element that renders Markdown text. + +:::{tip} +Use the [`markdown`](#markdown_fn) function for single numbers, metrics, and paragraphs of text. +::: **Accepts:** `datatable`, `null` diff --git a/reference/fleet/add-fleet-server-cloud.md b/reference/fleet/add-fleet-server-cloud.md index d2f4d9090..c1b5076e3 100644 --- a/reference/fleet/add-fleet-server-cloud.md +++ b/reference/fleet/add-fleet-server-cloud.md @@ -70,7 +70,7 @@ To confirm that an {{integrations-server}} is available in your deployment: Don’t see the agent? Make sure your deployment includes an {{integrations-server}} instance. This instance is required to use {{fleet}}. 
:::{image} images/integrations-server-hosted-container.png -:alt: Hosted {integrations-server} +:alt: Hosted {{integrations-server}} :screenshot: ::: diff --git a/reference/fleet/add-fleet-server-mixed.md b/reference/fleet/add-fleet-server-mixed.md index cf93fe92c..7381ece6b 100644 --- a/reference/fleet/add-fleet-server-mixed.md +++ b/reference/fleet/add-fleet-server-mixed.md @@ -31,7 +31,7 @@ To deploy a self-managed {{fleet-server}} on-premises to work with an {{ech}} de * {{stack}} 7.13 or later * For version compatibility, {{es}} must be at the same or a later version than {{fleet-server}}, and {{fleet-server}} needs to be at the same or a later version than {{agent}} (not including patch releases). - * {{kib}} should be on the same minor version as {es} + * {{kib}} should be on the same minor version as {{es}} * {{ece}} 2.9 or later—​allows you to use a hosted {{fleet-server}} on {{ecloud}}. diff --git a/reference/fleet/agent-command-reference.md b/reference/fleet/agent-command-reference.md index db87d81a6..6924147c6 100644 --- a/reference/fleet/agent-command-reference.md +++ b/reference/fleet/agent-command-reference.md @@ -355,7 +355,7 @@ Start {{agent}} with {{fleet-server}} (running on a custom CA). This example ass * `ca.crt`: Root CA certificate * `fleet-server.crt`: {{fleet-server}} certificate * `fleet-server.key`: {{fleet-server}} private key -* `elasticsearch-ca.crt`: CA certificate to use to connect to {es} +* `elasticsearch-ca.crt`: CA certificate to use to connect to {{es}} ```shell elastic-agent enroll \ @@ -754,7 +754,7 @@ Start {{agent}} with {{fleet-server}} (running on a custom CA). This example ass * `ca.crt`: Root CA certificate * `fleet-server.crt`: {{fleet-server}} certificate * `fleet-server.key`: {{fleet-server}} private key -* `elasticsearch-ca.crt`: CA certificate to use to connect to {es} +* `elasticsearch-ca.crt`: CA certificate to use to connect to {{es}} ```shell elastic-agent install \ diff --git a/reference/fleet/agent-environment-variables.md b/reference/fleet/agent-environment-variables.md index 45eca433c..924024fb1 100644 --- a/reference/fleet/agent-environment-variables.md +++ b/reference/fleet/agent-environment-variables.md @@ -10,9 +10,9 @@ mapped_pages: Use environment variables to configure {{agent}} when running in a containerized environment. Variables on this page are grouped by action type: * [Common variables](#env-common-vars) -* [Configure {{kib}}:](#env-prepare-kibana-for-fleet) prepare the {{fleet}} plugin in {kib} -* [Configure {{fleet-server}}:](#env-bootstrap-fleet-server) bootstrap {{fleet-server}} on an {agent} -* [Configure {{agent}} and {{fleet}}:](#env-enroll-agent) enroll an {agent} +* [Configure {{kib}}:](#env-prepare-kibana-for-fleet) prepare the {{fleet}} plugin in {{kib}} +* [Configure {{fleet-server}}:](#env-bootstrap-fleet-server) bootstrap {{fleet-server}} on an {{agent}} +* [Configure {{agent}} and {{fleet}}:](#env-enroll-agent) enroll an {{agent}} ## Common variables [env-common-vars] diff --git a/reference/fleet/conditions-based-autodiscover.md b/reference/fleet/conditions-based-autodiscover.md index 397b41011..5d7d4a7b0 100644 --- a/reference/fleet/conditions-based-autodiscover.md +++ b/reference/fleet/conditions-based-autodiscover.md @@ -128,7 +128,7 @@ Inside the container [inspect the output](/reference/fleet/agent-command-referen elastic-agent inspect --variables --variables-wait 1s -c /etc/elastic-agent/agent.yml ``` -::::{dropdown} You should now be able to see the generated policy. 
If you look for the `scheduler`, it will look similar to this. +::::{dropdown} You should now be able to see the generated policy. If you look for the scheduler, it will look similar to this. ```yaml - bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token hosts: diff --git a/reference/fleet/create-standalone-agent-policy.md b/reference/fleet/create-standalone-agent-policy.md index 416cc50aa..fb67f72b1 100644 --- a/reference/fleet/create-standalone-agent-policy.md +++ b/reference/fleet/create-standalone-agent-policy.md @@ -51,7 +51,7 @@ You don’t need {{fleet}} to perform the following steps, but on self-managed c ::: -The downloaded policy already contains a default {{es}} address and port for your setup. You may need to change them if you use a proxy or load balancer. Modify the policy, as required, making sure that you provide credentials for connecting to {es} +The downloaded policy already contains a default {{es}} address and port for your setup. You may need to change them if you use a proxy or load balancer. Modify the policy, as required, making sure that you provide credentials for connecting to {{es}} If you need to add integrations to the policy *after* deploying it, you’ll need to run through these steps again and re-deploy the updated policy to the host where {{agent}} is running. diff --git a/reference/fleet/data-streams-pipeline-tutorial.md b/reference/fleet/data-streams-pipeline-tutorial.md index 34196ceaf..aac4c636e 100644 --- a/reference/fleet/data-streams-pipeline-tutorial.md +++ b/reference/fleet/data-streams-pipeline-tutorial.md @@ -16,7 +16,7 @@ This tutorial explains how to add a custom ingest pipeline to an Elastic Integra Create a custom ingest pipeline that will be called by the default integration pipeline. In this tutorial, we’ll create a pipeline that adds a new field to our documents. -1. In {{kib}}, navigate to **Stack Management** → **Ingest Pipelines*** → ***Create pipeline** → **New pipeline**. +1. In {{kib}}, navigate to **Stack Management** → **Ingest Pipelines** → **Create pipeline** → **New pipeline**. 2. Name your pipeline. We’ll call this one, `add_field`. 3. Select **Add a processor**. Fill out the following information: diff --git a/reference/fleet/elastic-agent-container.md b/reference/fleet/elastic-agent-container.md index 04226dbf9..4b2e1eef6 100644 --- a/reference/fleet/elastic-agent-container.md +++ b/reference/fleet/elastic-agent-container.md @@ -298,7 +298,7 @@ agent.monitoring: The above configuration exposes a monitoring endpoint at `http://localhost:6791/processes`. -::::{dropdown} `http://localhost:6791/processes` output +::::{dropdown} http://localhost:6791/processes output ```json { "processes":[ @@ -344,7 +344,7 @@ The above configuration exposes a monitoring endpoint at `http://localhost:6791/ Each process ID in the `/processes` output can be accessed for more details. 
-::::{dropdown} `http://localhost:6791/processes/{{process-name}}` output +::::{dropdown} http://localhost:6791/processes/\{\{process-name\}\} output ```json { "beat":{ diff --git a/reference/fleet/elastic-agent-proxy-config.md b/reference/fleet/elastic-agent-proxy-config.md index 5125cb8a0..6369b586e 100644 --- a/reference/fleet/elastic-agent-proxy-config.md +++ b/reference/fleet/elastic-agent-proxy-config.md @@ -8,17 +8,17 @@ mapped_pages: Configure proxy settings for {{agent}} when it must connect through a proxy server to: * Download artifacts from `artifacts.elastic.co` for subprocesses or binary upgrades (use [Agent binary download settings](/reference/fleet/fleet-settings.md#fleet-agent-binary-download-settings)) -* Send data to {es} -* Retrieve agent policies from {fleet-server} +* Send data to {{es}} +* Retrieve agent policies from {{fleet-server}} * Retrieve agent policies from {{es}} (only needed for agents running {{fleet-server}}) :::{image} images/agent-proxy-server.png -:alt: Image showing connections between {agent} +:alt: Image showing connections between {{agent}} ::: If {{fleet}} is unable to access the {{package-registry}} because {{kib}} is behind a proxy server, you may also need to set the registry proxy URL in the {{kib}} configuration. :::{image} images/fleet-epr-proxy.png -:alt: Image showing connections between {{fleet}} and the {package-registry} +:alt: Image showing connections between {{fleet}} and the {{package-registry}} ::: diff --git a/reference/fleet/es-output-settings.md b/reference/fleet/es-output-settings.md index 8813b8d04..51264e27f 100644 --- a/reference/fleet/es-output-settings.md +++ b/reference/fleet/es-output-settings.md @@ -45,14 +45,14 @@ Specify these settings to send data over a secure connection to {{es}}. In the { **Performance tuning** $$$es-agent-performance-tuning$$$ : Choose one of the menu options to tune your {{agent}} performance when sending data to an {{es}} output. You can optimize for throughput, scale, latency, or you can choose a balanced (the default) set of performance specifications. Refer to [Performance tuning settings](#es-output-settings-performance-tuning-settings) for details about the setting values and their potential impact on performance. - You can also use the [Advanced YAML configuration](#es-output-settings-yaml-config) field to set custom values. Note that if you adjust any of the performance settings described in the following **Advanced YAML configuration*** section, the ***Performance tuning*** option automatically changes to `Custom` and cannot be changed. + You can also use the [Advanced YAML configuration](#es-output-settings-yaml-config) field to set custom values. Note that if you adjust any of the performance settings described in the following **Advanced YAML configuration** section, the **Performance tuning** option automatically changes to `Custom` and cannot be changed. - Performance tuning preset values take precedence over any settings that may be defined separately. If you want to change any setting, you need to use the `Custom` ***Performance tuning*** option and specify the settings in the ***Advanced YAML configuration*** field. + Performance tuning preset values take precedence over any settings that may be defined separately. If you want to change any setting, you need to use the `Custom` **Performance tuning** option and specify the settings in the **Advanced YAML configuration** field. 
For example, if you would like to use the balanced preset values except that you prefer a higher compression level, you can do so as follows:

-    1. In {{fleet}}, open the ***Settings*** tab.
-    2. In the ***Outputs*** section, select ***Add output*** to create a new output, or select the edit icon to edit an existing output.
-    3. In the ***Add new output*** or the ***Edit output*** flyout, set ***Performance tuning** to `Custom`.
+    1. In {{fleet}}, open the **Settings** tab.
+    2. In the **Outputs** section, select **Add output** to create a new output, or select the edit icon to edit an existing output.
+    3. In the **Add new output** or the **Edit output** flyout, set **Performance tuning** to `Custom`.
    4. Refer to the list of [performance tuning preset values](#es-output-settings-performance-tuning-settings), and add the settings you prefer into the **Advanced YAML configuration** field. For the `balanced` presets, the YAML configuration would be as shown:

    ```yaml
diff --git a/reference/fleet/example-kubernetes-fleet-managed-agent-helm.md b/reference/fleet/example-kubernetes-fleet-managed-agent-helm.md
index 28d9f18db..28ed264d9 100644
--- a/reference/fleet/example-kubernetes-fleet-managed-agent-helm.md
+++ b/reference/fleet/example-kubernetes-fleet-managed-agent-helm.md
@@ -121,7 +121,7 @@ Now that you’ve {{agent}} and data is flowing, you can set up the {{k8s}} inte
1. In your {{ecloud}} deployment, from the {{kib}} menu open the **Integrations** page.
2. Run a search for `Kubernetes` and then select the {{k8s}} integration card.
3. On the {{k8s}} integration page, click **Add Kubernetes** to add the integration to your {{agent}} policy.
-4. Scroll to the bottom of **Add Kubernetes integration** page. Under **Where to add this integration?*** select the ***Existing hosts** tab. On the **Agent policies** menu, select the agent policy that you created previously in the [Install {{agent}}](#agent-fleet-managed-helm-example-install-agent) steps.
+4. Scroll to the bottom of the **Add Kubernetes integration** page. Under **Where to add this integration?** select the **Existing hosts** tab. On the **Agent policies** menu, select the agent policy that you created previously in the [Install {{agent}}](#agent-fleet-managed-helm-example-install-agent) steps.

    You can leave all of the other integration settings at their default values.

diff --git a/reference/fleet/fleet-agent-proxy-managed.md b/reference/fleet/fleet-agent-proxy-managed.md
index 8125b4dba..54c9715ca 100644
--- a/reference/fleet/fleet-agent-proxy-managed.md
+++ b/reference/fleet/fleet-agent-proxy-managed.md
@@ -14,7 +14,7 @@ This page describes where a proxy server is allowed in your deployment and how t
{{fleet}} central management enables you to define your proxy servers and then configure an output or the {{fleet-server}} to be reachable through any of these proxies. This also enables you to modify the proxy server details if needed without having to re-install {{agents}}.

:::{image} images/agent-proxy-server-managed-deployment.png
-:alt: Image showing connections between {{fleet}} managed {agent}
+:alt: Image showing connections between {{fleet}} managed {{agent}}
:::

In this scenario, Fleet Server and Elasticsearch are deployed in {{ecloud}} and reachable on port 443.
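To make the proxied connection concrete, the following is a minimal sketch of the output section that {{fleet}} could render into an agent policy once a proxy is attached to the output. The endpoint and proxy URLs are hypothetical placeholders, not values from this guide:

```yaml
outputs:
  default:
    type: elasticsearch
    # Hypothetical {{ecloud}} endpoint, reachable on port 443
    hosts: ["https://my-deployment.es.us-east-1.aws.found.io:443"]
    # Hypothetical proxy server that sits between the agent and the output
    proxy_url: "http://proxy.internal.example.com:3128"
```

With a proxy defined this way in {{fleet}}, updating the proxy server details later propagates to agents without reinstalling them.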
diff --git a/reference/fleet/fleet-enrollment-tokens.md b/reference/fleet/fleet-enrollment-tokens.md
index 696c6c628..8bb972508 100644
--- a/reference/fleet/fleet-enrollment-tokens.md
+++ b/reference/fleet/fleet-enrollment-tokens.md
@@ -36,7 +36,7 @@ To create an enrollment token:
    Note that the token name you specify must be unique so as to avoid conflict with any existing API keys.

    :::{image} images/create-token.png
-    :alt: Enrollment tokens tab in {fleet}
+    :alt: Enrollment tokens tab in {{fleet}}
    :screenshot:
    :::

diff --git a/reference/fleet/fleet-roles-privileges.md b/reference/fleet/fleet-roles-privileges.md
index 43c1029cf..69a6ca9cd 100644
--- a/reference/fleet/fleet-roles-privileges.md
+++ b/reference/fleet/fleet-roles-privileges.md
@@ -49,7 +49,7 @@ To create a new role with access to {{fleet}} and Integrations:
4. Specify a name for the role.
5. Leave the {{es}} settings at their defaults, or refer to [Security privileges](elasticsearch://reference/elasticsearch/security-privileges.md) for descriptions of the available settings.
6. In the {{kib}} section, select **Assign to space**.
-7. In the **Spaces** menu, select *** All Spaces**. Since many Integrations assets are shared across spaces, the users need the {{kib}} privileges in all spaces.
+7. In the **Spaces** menu, select **All Spaces**. Since many Integrations assets are shared across spaces, users need {{kib}} privileges in all spaces.
8. Expand the **Management** section.
9. Set **Fleet** privileges to **All**.
10. Choose the access level that you'd like the role to have with respect to {{fleet}} and integrations:
diff --git a/reference/fleet/install-elastic-agents.md b/reference/fleet/install-elastic-agents.md
index 5584da1a3..e850d515b 100644
--- a/reference/fleet/install-elastic-agents.md
+++ b/reference/fleet/install-elastic-agents.md
@@ -123,11 +123,11 @@ We tested using an AWS `m7i.large` instance type with 2 vCPUs, 8.0 GB of memory,

| Resource | Throughput | Scale |
| --- | --- | --- |
-| **CPU*** | ~67% | ~20% |
-| **RSS memory size*** | ~280 MB | ~220 MB |
+| **CPU**[^1^](#footnote-1) | ~67% | ~20% |
+| **RSS memory size**[^1^](#footnote-1) | ~280 MB | ~220 MB |
| **Write network throughput** | ~3.5 MB/s | 480 KB/s |

-* including all monitoring processes
+^1^ $$$footnote-1$$$ including all monitoring processes

Adding integrations will increase the memory used by the agent and its processes.

diff --git a/reference/fleet/install-fleet-managed-elastic-agent.md b/reference/fleet/install-fleet-managed-elastic-agent.md
index da739c2da..e61896f6d 100644
--- a/reference/fleet/install-fleet-managed-elastic-agent.md
+++ b/reference/fleet/install-fleet-managed-elastic-agent.md
@@ -68,7 +68,7 @@ To install an {{agent}} and enroll it in {{fleet}}:

2. Beginning with version 9.0, {{agent}} packages are available in multiple flavors. The default, "basic" flavor contains the components required for most data collection use cases. A "servers" flavor is also available with additional components. You can adjust the `elastic-agent install` command as required to choose a different flavor. Refer to [{{agent}} installation flavors](./install-elastic-agents.md#elastic-agent-installation-flavors) for details.
:::{image} images/kibana-agent-flyout.png
-     :alt: Add agent flyout in {kib}
+     :alt: Add agent flyout in {{kib}}
      :screenshot:
      :::
diff --git a/reference/fleet/install-standalone-elastic-agent.md b/reference/fleet/install-standalone-elastic-agent.md
index ef2bc81ef..4ac0516ee 100644
--- a/reference/fleet/install-standalone-elastic-agent.md
+++ b/reference/fleet/install-standalone-elastic-agent.md
@@ -55,10 +55,11 @@ To install and run {{agent}} standalone:
    :::

    :::{tab-item} DEB
-    IMPORTANT:
+    ::::::{important}

    * To simplify upgrading to future versions of {{agent}}, we recommend that you use the tarball distribution instead of the DEB distribution.
    * You can install {{agent}} in an `unprivileged` mode that does not require `root` privileges. Refer to [Run {{agent}} without administrative privileges](./elastic-agent-unprivileged.md) for details.
+    ::::::

    ```shell subs=true
    curl -L -O https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-{{stack-version}}-amd64.deb
@@ -78,17 +79,19 @@ To install and run {{agent}} standalone:
    curl -L -O https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-{{stack-version}}-amd64.deb
    ELASTIC_AGENT_FLAVOR=servers sudo -E dpkg -i elastic-agent-{{stack-version}}-amd64.deb
    ```
-
-    **NOTE:** If you need to uninstall an {{agent}} package on Debian Linux, note that the `dpkg -r` command to remove a package leaves the flavor file in place. Instead, `dpkg -P` must be used to purge all package content and reset the flavor.
+
+    ::::::{note}
+    If you need to uninstall an {{agent}} package on Debian Linux, note that the `dpkg -r` command to remove a package leaves the flavor file in place. Instead, `dpkg -P` must be used to purge all package content and reset the flavor.
+    ::::::

    :::

    :::{tab-item} RPM
-    IMPORTANT:
-
+
+    ::::::{important}
    * To simplify upgrading to future versions of {{agent}}, we recommend that you use the tarball distribution instead of the RPM distribution.
    * You can install {{agent}} in an `unprivileged` mode that does not require `root` privileges. Refer to [Run {{agent}} without administrative privileges](./elastic-agent-unprivileged.md) for details.
-
+    ::::::

    ```shell subs=true
    curl -L -O https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-{{stack-version}}-x86_64.rpm
@@ -96,7 +99,7 @@ To install and run {{agent}} standalone:
    ```

    By default the {{agent}} basic flavor is installed. To install the servers flavor, add the `ELASTIC_AGENT_FLAVOR=servers` parameter. Refer to [{{agent}} installation flavors](./install-elastic-agents.md#elastic-agent-installation-flavors) for details about the different flavors.
-
+
    You can use either of the two command formats to set the `ELASTIC_AGENT_FLAVOR` environment variable:

    ```shell subs=true
@@ -169,7 +172,7 @@ To install and run {{agent}} standalone:

    ```shell
    sudo ./elastic-agent install
-    ```
+    ```

    By default the {{agent}} basic flavor is installed. To install the servers flavor, add the `--install-servers` parameter. Refer to [{{agent}} installation flavors](./install-elastic-agents.md#elastic-agent-installation-flavors) for details.

@@ -195,7 +198,7 @@ To install and run {{agent}} standalone:
    sudo systemctl enable elastic-agent <1>
    sudo systemctl start elastic-agent
    ```
-
+
1. The DEB package includes a service unit for Linux systems with systemd. On these systems, you can manage {{agent}} by using the usual systemd commands. If you don’t have systemd, run `sudo service elastic-agent start`.
::::
diff --git a/reference/fleet/installation-layout.md b/reference/fleet/installation-layout.md
index 6af7b2c24..2f59bf69f 100644
--- a/reference/fleet/installation-layout.md
+++ b/reference/fleet/installation-layout.md
@@ -20,7 +20,7 @@ mapped_pages:
: Main {{agent}} {{fleet}} encrypted configuration

`/Library/Elastic/Agent/data/elastic-agent-*/logs/elastic-agent.ndjson`
-: Log files for {{agent}} and {{beats}} shippers ^1^
+: Log files for {{agent}} and {{beats}} shippers[^1^](#footnote-1)

`/usr/bin/elastic-agent`
: Shell wrapper installed into PATH
@@ -39,7 +39,7 @@ You can install {{agent}} in a custom base path other than `/Library`. When ins
: Main {{agent}} {{fleet}} encrypted configuration

`/opt/Elastic/Agent/data/elastic-agent-*/logs/elastic-agent.ndjson`
-: Log files for {{agent}} and {{beats}} shippers ^1^
+: Log files for {{agent}} and {{beats}} shippers[^1^](#footnote-1)

`/usr/bin/elastic-agent`
: Shell wrapper installed into PATH
@@ -58,7 +58,7 @@ You can install {{agent}} in a custom base path other than `/opt`. When install
: Main {{agent}} {{fleet}} encrypted configuration

`C:\Program Files\Elastic\Agent\data\elastic-agent-*\logs\elastic-agent.ndjson`
-: Log files for {{agent}} and {{beats}} shippers ^1^
+: Log files for {{agent}} and {{beats}} shippers[^1^](#footnote-1)

You can install {{agent}} in a custom base path other than `C:\Program Files`. When installing {{agent}} with the `.\elastic-agent.exe install` command, use the `--base-path` CLI option to specify the custom base path.
::::::
@@ -74,7 +74,7 @@ You can install {{agent}} in a custom base path other than `C:\Program Files`.
: Main {{agent}} {{fleet}} encrypted configuration

`/var/lib/elastic-agent/data/elastic-agent-*/logs/elastic-agent.ndjson`
-: Log files for {{agent}} and {{beats}} shippers ^1^
+: Log files for {{agent}} and {{beats}} shippers[^1^](#footnote-1)

`/usr/bin/elastic-agent`
: Shell wrapper installed into PATH
@@ -91,7 +91,7 @@ You can install {{agent}} in a custom base path other than `C:\Program Files`.
: Main {{agent}} {{fleet}} encrypted configuration

`/var/lib/elastic-agent/data/elastic-agent-*/logs/elastic-agent.ndjson`
-: Log files for {{agent}} and {{beats}} shippers ^1^
+: Log files for {{agent}} and {{beats}} shippers[^1^](#footnote-1)

`/usr/bin/elastic-agent`
: Shell wrapper installed into PATH
@@ -99,4 +99,4 @@ You can install {{agent}} in a custom base path other than `C:\Program Files`.

:::::::

-^1^ Logs file names end with a date `(YYYYMMDD)` and optional number: `elastic-agent-YYYYMMDD.ndjson`, `elastic-agent-YYYYMMDD-1.ndjson`, and so on as new files are created during rotation.
+^1^ $$$footnote-1$$$ Log file names end with a date (`YYYYMMDD`) and an optional number: `elastic-agent-YYYYMMDD.ndjson`, `elastic-agent-YYYYMMDD-1.ndjson`, and so on as new files are created during rotation.
diff --git a/reference/fleet/kafka-output-settings.md b/reference/fleet/kafka-output-settings.md
index de91d7c15..338a7669c 100644
--- a/reference/fleet/kafka-output-settings.md
+++ b/reference/fleet/kafka-output-settings.md
@@ -44,7 +44,7 @@ Select the mechanism that {{agent}} uses to authenticate with Kafka.

Encryption
: Set this option for traffic between {{agent}} and Kafka to use transport layer security.

-    When **Encryption*** is selected, the ***Server SSL certificate authorities** and **Verification mode** mode options become available.
+    When **Encryption** is selected, the **Server SSL certificate authorities** and **Verification mode** options become available.
**Username / Password** $$$kafka-output-authentication-basic$$$ diff --git a/reference/fleet/ls-output-settings.md b/reference/fleet/ls-output-settings.md index 6e3d387f7..8b6140706 100644 --- a/reference/fleet/ls-output-settings.md +++ b/reference/fleet/ls-output-settings.md @@ -62,22 +62,119 @@ output { ## Advanced YAML configuration [ls-output-settings-yaml-config] -| Setting | Description | -| --- | --- | -| $$$output-logstash-fleet-settings-backoff.init-setting$$$
`backoff.init`
| (string) The number of seconds to wait before trying to reconnect to {{ls}} after a network error. After waiting `backoff.init` seconds, {{agent}} tries to reconnect. If the attempt fails, the backoff timer is increased exponentially up to `backoff.max`. After a successful connection, the backoff timer is reset.

**Default:** `1s`
| -| $$$output-logstash-fleet-settings-backoff.max-setting$$$
`backoff.max`
| (string) The maximum number of seconds to wait before attempting to connect to {{es}} after a network error.

**Default:** `60s`
| -| $$$output-logstash-fleet-settings-bulk_max_size-setting$$$
`bulk_max_size`
| (int) The maximum number of events to bulk in a single {{ls}} request.

Events can be collected into batches. {{agent}} will split batches larger than `bulk_max_size` into multiple batches.

Specifying a larger batch size can improve performance by lowering the overhead of sending events. However big batch sizes can also increase processing times, which might result in API errors, killed connections, timed-out publishing requests, and, ultimately, lower throughput.

Set this value to `0` to turn off the splitting of batches. When splitting is turned off, the queue determines the number of events to be contained in a batch.

**Default:** `2048`
| -| $$$output-logstash-fleet-settings-compression_level-setting$$$
`compression_level`
| (int) The gzip compression level. Set this value to `0` to disable compression. The compression level must be in the range of `1` (best speed) to `9` (best compression).

Increasing the compression level reduces network usage but increases CPU usage.
| -| $$$output-logstash-fleet-settings-escape_html-setting$$$
`escape_html`
| (boolean) Configures escaping of HTML in strings. Set to `true` to enable escaping.

**Default:** `false`
| -| $$$output-logstash-fleet-settings-index-setting$$$
`index`
| (string) The index root name to write events to.
| -| $$$output-logstash-fleet-settings-loadbalance-setting$$$
`loadbalance`
| If `true` and multiple {{ls}} hosts are configured, the output plugin load balances published events onto all {{ls}} hosts. If `false`, the output plugin sends all events to one host (determined at random) and switches to another host if the selected one becomes unresponsive.

With `loadbalance` enabled:

* {{agent}} reads batches of events and sends each batch to one {{ls}} worker dynamically, based on a work-queue shared between the outputs.
* If a connection drops, {{agent}} takes the disconnected {{ls}} worker out of its pool.
* {{agent}} tries to reconnect. If it succeeds, it re-adds the {{ls}} worker to the pool.
* If one of the {{ls}} nodes is slow but "healthy", it sends a keep-alive signal until the full batch of data is processed. This prevents {{agent}} from sending further data until it receives an acknowledgement signal back from {{ls}}. {{agent}} keeps all events in memory until after that acknowledgement occurs.

Without `loadbalance` enabled:

* {{agent}} picks a random {{ls}} host and sends batches of events to it. Due to the random algorithm, the load on the {{ls}} nodes should be roughly equal.
* In case of any errors, {{agent}} picks another {{ls}} node, also at random. If a connection to a host fails, the host is retried only if there are errors on the new connection.

**Default:** `false`

Example:

```yaml
outputs:
default:
type: logstash
hosts: ["localhost:5044", "localhost:5045"]
loadbalance: true
```
| -| $$$output-logstash-fleet-settings-max_retries-setting$$$
`max_retries`
| (int) The number of times to retry publishing an event after a publishing failure. After the specified number of retries, the events are typically dropped.

Set `max_retries` to a value less than 0 to retry until all events are published.

**Default:** `3`
| -| $$$output-logstash-fleet-settings-pipelining-setting$$$
`pipelining`
| (int) The number of batches to send asynchronously to {{ls}} while waiting for an ACK from {{ls}}. The output becomes blocking after the specified number of batches are written. Specify `0` to turn off pipelining.

**Default:** `2`
| -| $$$output-logstash-fleet-settings-proxy_use_local_resolver-setting$$$
`proxy_use_` `local_resolver`
| (boolean) Determines whether {{ls}} hostnames are resolved locally when using a proxy. If `false` and a proxy is used, name resolution occurs on the proxy server.

**Default:** `false`
| -| $$$output-logstash-fleet-settings-queue.mem.events-setting$$$
`queue.mem.events`
| The number of events the queue can store. This value should be evenly divisible by the smaller of `queue.mem.flush.min_events` or `bulk_max_size` to avoid sending partial batches to the output.

**Default:** `3200 events`
| -| $$$output-logstash-fleet-settings-queue.mem.flush.min_events-setting$$$
`queue.mem.flush.min_events`
| `flush.min_events` is a legacy parameter, and new configurations should prefer to control batch size with `bulk_max_size`. As of 8.13, there is never a performance advantage to limiting batch size with `flush.min_events` instead of `bulk_max_size`

**Default:** `1600 events`
| -| $$$output-logstash-fleet-settings-queue.mem.flush.timeout-setting$$$
`queue.mem.flush.timeout`
| (int) The maximum wait time for `queue.mem.flush.min_events` to be fulfilled. If set to 0s, events are available to the output immediately.

**Default:** `10s`
| -| $$$output-logstash-fleet-settings-slow_start-setting$$$
`slow_start`
| (boolean) If `true`, only a subset of events in a batch of events is transferred per transaction. The number of events to be sent increases up to `bulk_max_size` if no error is encountered. On error, the number of events per transaction is reduced again.

**Default:** `false`
| -| $$$output-logstash-fleet-settings-timeout-setting$$$
`timeout`
| (string) The number of seconds to wait for responses from the {{ls}} server before timing out.

**Default:** `30s`
| -| $$$output-logstash-fleet-settings-ttl-setting$$$
`ttl`
| (string) Time to live for a connection to {{ls}} after which the connection will be reestablished. This setting is useful when {{ls}} hosts represent load balancers. Because connections to {{ls}} hosts are sticky, operating behind load balancers can lead to uneven load distribution across instances. Specify a TTL on the connection to achieve equal connection distribution across instances.

**Default:** `0` (turns off the feature)

::::{note}
The `ttl` option is not yet supported on an asynchronous {{ls}} client (one with the `pipelining` option set).
::::

| -| $$$output-logstash-fleet-settings-worker-setting$$$
`worker`
| (int) The number of workers per configured host publishing events. Example: If you have two hosts and three workers, in total six workers are started (three for each host).

**Default:** `1`
|
+`backoff.init` $$$output-logstash-fleet-settings-backoff.init-setting$$$
+: (string) The number of seconds to wait before trying to reconnect to {{ls}} after a network error. After waiting `backoff.init` seconds, {{agent}} tries to reconnect. If the attempt fails, the backoff timer is increased exponentially up to `backoff.max`. After a successful connection, the backoff timer is reset.
+
+    **Default:** `1s`
+
+`backoff.max` $$$output-logstash-fleet-settings-backoff.max-setting$$$
+: (string) The maximum number of seconds to wait before attempting to connect to {{ls}} after a network error.
+
+    **Default:** `60s`
+
+`bulk_max_size` $$$output-logstash-fleet-settings-bulk_max_size-setting$$$
+: (int) The maximum number of events to bulk in a single {{ls}} request.
+
+    Events can be collected into batches. {{agent}} will split batches larger than `bulk_max_size` into multiple batches.
+
+    Specifying a larger batch size can improve performance by lowering the overhead of sending events. However, big batch sizes can also increase processing times, which might result in API errors, killed connections, timed-out publishing requests, and, ultimately, lower throughput.
+
+    Set this value to `0` to turn off the splitting of batches. When splitting is turned off, the queue determines the number of events to be contained in a batch.
+
+    **Default:** `2048`
+
+`compression_level` $$$output-logstash-fleet-settings-compression_level-setting$$$
+: (int) The gzip compression level. Set this value to `0` to disable compression. The compression level must be in the range of `1` (best speed) to `9` (best compression).
+
+    Increasing the compression level reduces network usage but increases CPU usage.
+
+`escape_html` $$$output-logstash-fleet-settings-escape_html-setting$$$
+: (boolean) Configures escaping of HTML in strings. Set to `true` to enable escaping.
+
+    **Default:** `false`
+
+`index` $$$output-logstash-fleet-settings-index-setting$$$
+: (string) The index root name to write events to.
+
+`loadbalance` $$$output-logstash-fleet-settings-loadbalance-setting$$$
+: If `true` and multiple {{ls}} hosts are configured, the output plugin load balances published events onto all {{ls}} hosts. If `false`, the output plugin sends all events to one host (determined at random) and switches to another host if the selected one becomes unresponsive.
+
+    With `loadbalance` enabled:
+
+    * {{agent}} reads batches of events and sends each batch to one {{ls}} worker dynamically, based on a work-queue shared between the outputs.
+    * If a connection drops, {{agent}} takes the disconnected {{ls}} worker out of its pool.
+    * {{agent}} tries to reconnect. If it succeeds, it re-adds the {{ls}} worker to the pool.
+    * If one of the {{ls}} nodes is slow but "healthy", it sends a keep-alive signal until the full batch of data is processed. This prevents {{agent}} from sending further data until it receives an acknowledgement signal back from {{ls}}. {{agent}} keeps all events in memory until after that acknowledgement occurs.
+
+    Without `loadbalance` enabled:
+
+    * {{agent}} picks a random {{ls}} host and sends batches of events to it. Due to the random algorithm, the load on the {{ls}} nodes should be roughly equal.
+    * In case of any errors, {{agent}} picks another {{ls}} node, also at random. If a connection to a host fails, the host is retried only if there are errors on the new connection.
+
+    **Default:** `false`
+
+    Example:
+
+    ```yaml
+    outputs:
+      default:
+        type: logstash
+        hosts: ["localhost:5044", "localhost:5045"]
+        loadbalance: true
+    ```
+
+`max_retries` $$$output-logstash-fleet-settings-max_retries-setting$$$
+: (int) The number of times to retry publishing an event after a publishing failure. After the specified number of retries, the events are typically dropped.
+
+    Set `max_retries` to a value less than 0 to retry until all events are published.
+
+    **Default:** `3`
+
+`pipelining` $$$output-logstash-fleet-settings-pipelining-setting$$$
+: (int) The number of batches to send asynchronously to {{ls}} while waiting for an ACK from {{ls}}. The output becomes blocking after the specified number of batches are written. Specify `0` to turn off pipelining.
+
+    **Default:** `2`
+
+`proxy_use_local_resolver` $$$output-logstash-fleet-settings-proxy_use_local_resolver-setting$$$
+: (boolean) Determines whether {{ls}} hostnames are resolved locally when using a proxy. If `false` and a proxy is used, name resolution occurs on the proxy server.
+
+    **Default:** `false`
+
+`queue.mem.events` $$$output-logstash-fleet-settings-queue.mem.events-setting$$$
+: The number of events the queue can store. This value should be evenly divisible by the smaller of `queue.mem.flush.min_events` or `bulk_max_size` to avoid sending partial batches to the output.
+
+    **Default:** `3200 events`
+
+`queue.mem.flush.min_events` $$$output-logstash-fleet-settings-queue.mem.flush.min_events-setting$$$
+: `flush.min_events` is a legacy parameter, and new configurations should prefer to control batch size with `bulk_max_size`. As of 8.13, there is never a performance advantage to limiting batch size with `flush.min_events` instead of `bulk_max_size`.
+
+    **Default:** `1600 events`
+
+`queue.mem.flush.timeout` $$$output-logstash-fleet-settings-queue.mem.flush.timeout-setting$$$
+: (int) The maximum wait time for `queue.mem.flush.min_events` to be fulfilled. If set to 0s, events are available to the output immediately.
+
+    **Default:** `10s`
+
+`slow_start` $$$output-logstash-fleet-settings-slow_start-setting$$$
+: (boolean) If `true`, only a subset of events in a batch of events is transferred per transaction. The number of events to be sent increases up to `bulk_max_size` if no error is encountered. On error, the number of events per transaction is reduced again.
+
+    **Default:** `false`
+
+`timeout` $$$output-logstash-fleet-settings-timeout-setting$$$
+: (string) The number of seconds to wait for responses from the {{ls}} server before timing out.
+
+    **Default:** `30s`
+
+`ttl` $$$output-logstash-fleet-settings-ttl-setting$$$
+: (string) Time to live for a connection to {{ls}} after which the connection will be reestablished. This setting is useful when {{ls}} hosts represent load balancers. Because connections to {{ls}} hosts are sticky, operating behind load balancers can lead to uneven load distribution across instances. Specify a TTL on the connection to achieve equal connection distribution across instances.
+
+    **Default:** `0` (turns off the feature)
+
+    ::::{note}
+    The `ttl` option is not yet supported on an asynchronous {{ls}} client (one with the `pipelining` option set).
+    ::::
+
+`worker` $$$output-logstash-fleet-settings-worker-setting$$$
+: (int) The number of workers per configured host publishing events. Example: If you have two hosts and three workers, in total six workers are started (three for each host).
+ + **Default:** `1` diff --git a/reference/fleet/migrate-from-beats-to-elastic-agent.md b/reference/fleet/migrate-from-beats-to-elastic-agent.md index 606e384bc..5cf0029bf 100644 --- a/reference/fleet/migrate-from-beats-to-elastic-agent.md +++ b/reference/fleet/migrate-from-beats-to-elastic-agent.md @@ -99,7 +99,7 @@ After deploying an {{agent}} to a host, view details about the agent and inspect 3. Go back to the main {{fleet}} page and click the **Data streams** tab. You should be able to see the data streams for various logs and metrics from the host. This is out-of-the-box without any extra configuration or dashboard creation: :::{image} images/migration-agent-data-streams01.png - :alt: Screen showing data streams created by the {agent} + :alt: Screen showing data streams created by the {{agent}} :screenshot: ::: @@ -113,14 +113,14 @@ After deploying an {{agent}} to a host, view details about the agent and inspect For example, filter on `filebeat-*` to see the data ingested by {{filebeat}}. :::{image} images/migration-event-from-filebeat.png - :alt: Screen showing event from {filebeat} + :alt: Screen showing event from {{filebeat}} :screenshot: ::: Next, filter on `logs-*`. Notice that the document contains `data_stream.*` fields that come from logs ingested by the {{agent}}. :::{image} images/migration-event-from-agent.png - :alt: Screen showing event from {agent} + :alt: Screen showing event from {{agent}} :screenshot: ::: diff --git a/reference/fleet/monitor-elastic-agent.md b/reference/fleet/monitor-elastic-agent.md index 436856f31..59232b079 100644 --- a/reference/fleet/monitor-elastic-agent.md +++ b/reference/fleet/monitor-elastic-agent.md @@ -30,7 +30,7 @@ For more detail about how agents communicate their status to {{fleet}}, refer to To view the overall status of your {{fleet}}-managed agents, in {{kib}}, go to **Management → {{fleet}} → Agents**. :::{image} images/kibana-fleet-agents.png -:alt: Agents tab showing status of each {agent} +:alt: Agents tab showing status of each {{agent}} :screenshot: ::: diff --git a/reference/fleet/otel-agent-transform.md b/reference/fleet/otel-agent-transform.md index dc4d893cd..5239375ff 100644 --- a/reference/fleet/otel-agent-transform.md +++ b/reference/fleet/otel-agent-transform.md @@ -23,7 +23,7 @@ In order to configure an installed standalone {{agent}} to run as an OTel Collec You’ll need the following: 1. A suitable [{{es}} API key](grant-access-to-elasticsearch.md#create-api-key-standalone-agent) for authenticating on Elasticsearch -2. An installed standalone {agent} +2. An installed standalone {{agent}} 3. A valid OTel Collector configuration. In this example we’ll use the OTel sample configuration included in the {{agent}} repository: `otel_samples/platformlogs_hostmetrics.yml`. 
* [Linux version](https://github.com/elastic/elastic-agent/blob/main/internal/pkg/otel/samples/linux/platformlogs_hostmetrics.yml)
@@ -41,13 +41,13 @@ To change a running standalone {{agent}} to run as an OTel Collector:

    * **Option 1:** Define environment variables for the {{agent}} service:

        * `ELASTIC_ENDPOINT`: The URL of the {{es}} instance where data will be sent
-        * `ELASTIC_API_KEY`: The API Key to use to authenticate with {es}
+        * `ELASTIC_API_KEY`: The API Key to use to authenticate with {{es}}
        * `STORAGE_DIR`: The directory where the OTel Collector can persist its state

    * **Option 2:** Replace the environment variable references in the sample configuration with the corresponding values:

        * `${env:ELASTIC_ENDPOINT}`: The URL of the {{es}} instance where data will be sent
-        * `${env:ELASTIC_API_KEY}`: The API Key to use to authenticate with {es}
+        * `${env:ELASTIC_API_KEY}`: The API Key to use to authenticate with {{es}}
        * `${env:STORAGE_DIR}`: The directory where the OTel Collector can persist its state

4. Save the opened OTel configuration as `elastic-agent.yml`, overwriting the default configuration of the installed agent.
diff --git a/reference/fleet/secure-connections.md b/reference/fleet/secure-connections.md
index 94dc4cb2c..89df7bd34 100644
--- a/reference/fleet/secure-connections.md
+++ b/reference/fleet/secure-connections.md
@@ -205,13 +205,13 @@ To encrypt traffic between {{agent}}s, {{fleet-server}}, and {{es}}:
: CA certificate that the current {{fleet-server}} uses to connect to {{es}}.

`certificate-authorities`
-: List of paths to PEM-encoded CA certificate files that should be trusted for the other {{agents}} to connect to this {fleet-server}
+: List of paths to PEM-encoded CA certificate files that should be trusted for the other {{agents}} to connect to this {{fleet-server}}

`fleet-server-cert`
-: The path for the PEM-encoded certificate (or certificate chain) which is associated with the fleet-server-cert-key to expose this {{fleet-server}} HTTPS endpoint to the other {agents}
+: The path for the PEM-encoded certificate (or certificate chain) which is associated with the fleet-server-cert-key to expose this {{fleet-server}} HTTPS endpoint to the other {{agents}}

`fleet-server-cert-key`
-: Private key to use to expose this {{fleet-server}} HTTPS endpoint to the other {agents}
+: Private key to use to expose this {{fleet-server}} HTTPS endpoint to the other {{agents}}

`elastic-agent-cert`
: The certificate to use as the client certificate for {{agent}}'s connections to {{fleet-server}}.
diff --git a/reference/fleet/upgrade-elastic-agent.md b/reference/fleet/upgrade-elastic-agent.md
index 248b6c22c..adc73eddf 100644
--- a/reference/fleet/upgrade-elastic-agent.md
+++ b/reference/fleet/upgrade-elastic-agent.md
@@ -67,7 +67,7 @@ To upgrade your {{agent}}s, go to **Management > {{fleet}} > Agents** in {{kib}}
2. From the **Actions** menu next to the agent, choose **Upgrade agent**.

    :::{image} images/upgrade-single-agent.png
-    :alt: Menu for upgrading a single {agent}
+    :alt: Menu for upgrading a single {{agent}}
    :screenshot:
    :::

@@ -76,7 +76,7 @@ To upgrade your {{agent}}s, go to **Management > {{fleet}} > Agents** in {{kib}}
In certain cases the latest available {{agent}} version may not be recognized by {{kib}}. For instance, this occurs when the {{kib}} version is lower than the {{agent}} version. You can specify a custom version for {{agent}} to upgrade to by entering the version into the **Upgrade version** text field.
:::{image} images/upgrade-agent-custom.png
-    :alt: Menu for upgrading a single {agent}
+    :alt: Menu for upgrading a single {{agent}}
    :screenshot:
    :::
diff --git a/reference/security/fields-and-object-schemas/timeline-object-schema.md b/reference/security/fields-and-object-schemas/timeline-object-schema.md
index 8e6578b01..65efa7121 100644
--- a/reference/security/fields-and-object-schemas/timeline-object-schema.md
+++ b/reference/security/fields-and-object-schemas/timeline-object-schema.md
@@ -19,10 +19,8 @@ All column, dropzone, and filter fields must be [ECS fields](ecs://reference/ind

This screenshot maps the Timeline UI components to their JSON objects:

-:::{image} /reference/images/security-timeline-object-ui.png
-:alt: timeline object ui
-:screenshot:
-:::
+% TO DO: Use `:class: screenshot`
+![timeline object ui](../images/security-timeline-object-ui.png)

1. [Title](#timeline-object-title) (`title`)
2. [Global notes](#timeline-object-global-notes) (`globalNotes`)
diff --git a/reference/images/security-timeline-object-ui.png b/reference/security/images/security-timeline-object-ui.png
similarity index 100%
rename from reference/images/security-timeline-object-ui.png
rename to reference/security/images/security-timeline-object-ui.png
diff --git a/release-notes/elastic-security/deprecations.md b/release-notes/elastic-security/deprecations.md
index 32ef5d51f..505391873 100644
--- a/release-notes/elastic-security/deprecations.md
+++ b/release-notes/elastic-security/deprecations.md
@@ -3,7 +3,7 @@ navigation_title: "Deprecations"
---

# {{elastic-sec}} deprecations [elastic-security-deprecations]
-Over time, certain Elastic functionality becomes outdated and is replaced or removed. To help with the transition, Elastic deprecates functionality for a period before removal, giving you time to update your applications. 
+Over time, certain Elastic functionality becomes outdated and is replaced or removed. To help with the transition, Elastic deprecates functionality for a period before removal, giving you time to update your applications.

Review the deprecated functionality for {{elastic-sec}}. While deprecations have no immediate impact, we strongly encourage you to update your implementation after you upgrade. To learn how to upgrade, check out [Upgrade](/deploy-manage/upgrade.md).

@@ -12,13 +12,13 @@ Review the deprecated functionality for {{elastic-sec}}. While deprecations have
% ::::{dropdown} Deprecation title
% Description of the deprecation.
% For more information, refer to [PR #](PR link).
-% **Impact**
Impact of deprecation. +% **Impact**
Impact of deprecation. % **Action**
Steps for mitigating deprecation impact. % :::: ## 9.0.0 [elastic-security-900-deprecations] -::::{dropdown} Renames the `integration-assistant` plugin +::::{dropdown} Renames the integration-assistant plugin Renames the `integration-assistant` plugin to `automatic-import` to match the associated feature. For more information, refer to [#207325]({{kib-pull}}207325). :::: diff --git a/release-notes/fleet-elastic-agent/breaking-changes.md b/release-notes/fleet-elastic-agent/breaking-changes.md index 2d95ebddd..176f29b2e 100644 --- a/release-notes/fleet-elastic-agent/breaking-changes.md +++ b/release-notes/fleet-elastic-agent/breaking-changes.md @@ -7,7 +7,7 @@ Breaking changes can impact your Elastic applications, potentially disrupting no % ## Next version [fleet-elastic-agent-nextversion-breaking-changes] -% ::::{dropdown} Title of breaking change +% ::::{dropdown} Title of breaking change % Description of the breaking change. % For more information, check [PR #](PR link). @@ -18,7 +18,7 @@ Breaking changes can impact your Elastic applications, potentially disrupting no ## 9.0.0 [fleet-elastic-agent-900-breaking-changes] -::::{dropdown} Removed deprecated `epm` Fleet APIs +::::{dropdown} Removed deprecated epm Fleet APIs Removed `GET/POST/DELETE /epm/packages/:pkgkey` APIs in favor of the `GET/POST/DELETE /epm/packages/:pkgName/:pkgVersion`. **Impact**
@@ -44,14 +44,14 @@ Removed deprecated parameters or responses:

For more information, check [#198313]({{kib-pull}}198313).
::::

-::::{dropdown} Removed `cloud defend` support for {{agent}}
+::::{dropdown} Removed cloud defend support for {{agent}}
Support for `cloud-defend` (Defend for Containers) has been removed. The package has been removed from the {{agent}} packaging scripts and template Kubernetes files.

For more information, check [#5481]({{agent-pull}}5481).
::::

-::::{dropdown} Removed `username` and `password` default values for {{agent}}
-The default values for `username` and `password` have been removed for when {{agent}} is running in container mode. The {es} `api_key` can now be set in that mode using the `ELASTICSEARCH_API_KEY` environment variable.
+::::{dropdown} Removed username and password default values for {{agent}}
+The default values for `username` and `password` have been removed for when {{agent}} is running in container mode. The {{es}} `api_key` can now be set in that mode using the `ELASTICSEARCH_API_KEY` environment variable.

For more information, check [#5536]({{agent-pull}}5536).
::::

@@ -62,14 +62,14 @@ The default Ubuntu-based Docker images used for {{agent}} have been changed to U

For more information, check [#6427]({{agent-pull}}6427).
::::

-::::{dropdown} Removed `--path.install` flag declaration from {{agent}} `paths` command
+::::{dropdown} Removed --path.install flag declaration from {{agent}} paths command
The deprecated `--path.install` flag declaration has been removed from the {{agent}} `paths` command and its use removed from the `container` and `enroll` commands.

For more information, check [#6461]({{agent-pull}}6461) and [#2489]({{agent-pull}}2489).
::::

::::{dropdown} Changed the default {{agent}} installation and upgrade
-The default {{agent}} installation and ugprade have been changed to include only the `agentbeat`, `endpoint-security` and `pf-host-agent` components. Additional components can be included using flags.
+The default {{agent}} installation and upgrade have been changed to include only the `agentbeat`, `endpoint-security` and `pf-host-agent` components. Additional components can be included using flags.

For more information, check [#6542]({{agent-pull}}6542).
::::

@@ -90,13 +90,13 @@ For more information, check [#198799]({{kib-pull}}198799).

For more information, check [#198799]({{kib-pull}}198799).
::::

-::::{dropdown} Removed deprecated `topics` property for kafka output in favor of the `topic` property
-Removed deprecated property `topics` from output APIs in response and requests (`(GET|POST|PUT) /api/fleet/outputs`) in favor of the `topic` property.
+::::{dropdown} Removed deprecated topics property for kafka output in favor of the topic property
+Removed deprecated property `topics` from output APIs in response and requests (`(GET|POST|PUT) /api/fleet/outputs`) in favor of the `topic` property.

For more information, check [#199226]({{kib-pull}}199226).
::::

-::::{dropdown} Limit pagination size to 100 when retrieving full policy or `withAgentCount` in Fleet
+::::{dropdown} Limit pagination size to 100 when retrieving full policy or withAgentCount in Fleet
In addition to the new pagination limit size of 100, retrieving agent policies without agent count is now the new default behavior, and a new query parameter `withAgentCount` was added to retrieve the agent count.

For more information, check [#196887]({{kib-pull}}196887).
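As an illustration of the container-mode change above (the removed `username`/`password` defaults), a Kubernetes pod spec fragment could supply the key through the documented environment variable. This is a hedged sketch: only the `ELASTICSEARCH_API_KEY` variable name comes from the change log, and the Secret name and key are hypothetical:

```yaml
# Hypothetical container spec fragment for {{agent}} running in container mode
env:
  - name: ELASTICSEARCH_API_KEY
    valueFrom:
      secretKeyRef:
        name: elastic-agent-credentials  # hypothetical Kubernetes Secret
        key: api-key
```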
diff --git a/release-notes/fleet-elastic-agent/index.md b/release-notes/fleet-elastic-agent/index.md
index 554c6c68a..b034405b9 100644
--- a/release-notes/fleet-elastic-agent/index.md
+++ b/release-notes/fleet-elastic-agent/index.md
@@ -51,7 +51,7 @@ To check for security updates, go to [Security announcements for the Elastic sta
### Fixes [fleet-elastic-agent-900-fixes]
* Fixes a validation error that occurs on multi-text input fields in {{fleet}} [#205768]({{kib-pull}}205768)
* Adds a context timeout to the bulker flush in {{fleet-server}} so it times out if it takes more time than the deadline [#3986]({{fleet-server-pull}}3986)
-* Removes a race condition that may occur when remote {es} outputs are used in {{fleet-server}} [#4171]({{fleet-server-pull}}4171)
+* Removes a race condition that may occur when remote {{es}} outputs are used in {{fleet-server}} [#4171]({{fleet-server-pull}}4171)
* Uses the chi/middleware.Throttle package to track in-flight requests and return a 429 response when the limit is reached in {{fleet-server}} [#4402]({{fleet-server-pull}}4402) and [#4400]({{fleet-server-issue}}4400)
* Fixes logical race conditions in the kubernetes_secrets provider in {{agent}} [#6623]({{agent-pull}}6623)
* Resolves the proxy to inject into agent component configurations using the Go http package in {{agent}} [#6675]({{agent-pull}}6675) and [#6209]({{agent-issue}}6209)
diff --git a/solutions/observability/apm/api-keys.md b/solutions/observability/apm/api-keys.md
index 7377f6c1a..ada08559f 100644
--- a/solutions/observability/apm/api-keys.md
+++ b/solutions/observability/apm/api-keys.md
@@ -189,21 +189,19 @@ APM Server provides a command line interface for creating, retrieving, invalidat

    The user requesting to create an API Key needs to have APM privileges used by the APM Server. A superuser, by default, has these privileges.

-    ::::{dropdown} Expand for more information on assigning these privileges to other users
-    To create an APM Server user with the required privileges for creating and managing API keys:
-
-    1. Create an **API key role**, called something like `apm_api_key`, that has the following `cluster` level privileges:
-
-        | Privilege | Purpose |
-        | --- | --- |
-        | `manage_own_api_key` | Allow APM Server to create, retrieve, and invalidate API keys |
-
-    2. Depending on what the **API key role** will be used for, also assign the appropriate `apm` application-level privileges:
-        * To **receive Agent configuration**, assign `config_agent:read`.
-        * To **ingest agent data**, assign `event:write`.
-        * To **upload source maps**, assign `sourcemap:write`.
-
-    ::::
+::::{dropdown} Expand for more information on assigning these privileges to other users
+To create an APM Server user with the required privileges for creating and managing API keys:
+
+1. Create an **API key role**, called something like `apm_api_key`, that has the following `cluster` level privileges:
+
+    | Privilege | Purpose |
+    | --- | --- |
+    | `manage_own_api_key` | Allow APM Server to create, retrieve, and invalidate API keys |
+
+2. Depending on what the **API key role** will be used for, also assign the appropriate `apm` application-level privileges:
+    * To **receive Agent configuration**, assign `config_agent:read`.
+    * To **ingest agent data**, assign `event:write`.
+    * To **upload source maps**, assign `sourcemap:write`.
+::::

**`info`**
: Query API Key(s). `--id` or `--name` required.
diff --git a/solutions/observability/apm/apm-server-command-reference.md b/solutions/observability/apm/apm-server-command-reference.md index a7ab50e1d..c0feb7eb0 100644 --- a/solutions/observability/apm/apm-server-command-reference.md +++ b/solutions/observability/apm/apm-server-command-reference.md @@ -28,7 +28,7 @@ Some of the features described here require an Elastic license. For more informa | Commands | | | --- | --- | -| [`apikey`](#apm-apikey-command) | Manage API Keys for communication between APM agents and server. [8.6.0] | +| [`apikey`](#apm-apikey-command) | Manage API Keys for communication between APM agents and server.

**This was deprecated in 8.6.0.** Create API Keys through Kibana or the Elasticsearch REST API. Refer to [API keys](/solutions/observability/apm/api-keys.md). | | [`export`](#apm-export-command) | Exports the configuration, index template, or {{ilm-init}} policy to stdout. | | [`help`](#apm-help-command) | Shows help for any command. | | [`keystore`](#apm-keystore-command) | Manages the [secrets keystore](/solutions/observability/apm/secrets-keystore-for-secure-settings.md). | @@ -65,21 +65,21 @@ apm-server apikey SUBCOMMAND [FLAGS] The user requesting to create an API Key needs to have APM privileges used by the APM Server. A superuser, by default, has these privileges. - ::::{dropdown} Expand for more information on assigning these privileges to other users - To create an APM Server user with the required privileges for creating and managing API keys: +::::{dropdown} Expand for more information on assigning these privileges to other users +To create an APM Server user with the required privileges for creating and managing API keys: - 1. Create an **API key role**, called something like `apm_api_key`, that has the following `cluster` level privileges: +1. Create an **API key role**, called something like `apm_api_key`, that has the following `cluster` level privileges: - | Privilege | Purpose | - | --- | --- | - | `manage_own_api_key` | Allow APM Server to create, retrieve, and invalidate API keys | + | Privilege | Purpose | + | --- | --- | + | `manage_own_api_key` | Allow APM Server to create, retrieve, and invalidate API keys | - 2. Depending on what the **API key role** will be used for, also assign the appropriate `apm` application-level privileges: - * To **receive Agent configuration**, assign `config_agent:read`. - * To **ingest agent data**, assign `event:write`. - * To **upload source maps**, assign `sourcemap:write`. +2. Depending on what the **API key role** will be used for, also assign the appropriate `apm` application-level privileges: + * To **receive Agent configuration**, assign `config_agent:read`. + * To **ingest agent data**, assign `event:write`. + * To **upload source maps**, assign `sourcemap:write`. - :::: +:::: **`info`** : Query API Key(s). `--id` or `--name` required. @@ -172,9 +172,9 @@ Also see [Global flags](#apm-global-flags). **EXAMPLES** -```sh +```sh subs=true apm-server export config -apm-server export template --es.version 9.0.0-beta1 --index myindexname +apm-server export template --es.version {{version}} --index myindexname ``` ## `help` command [apm-help-command] diff --git a/solutions/observability/apm/configure-apm-server.md b/solutions/observability/apm/configure-apm-server.md index 1ebf0f41e..14a7d13cf 100644 --- a/solutions/observability/apm/configure-apm-server.md +++ b/solutions/observability/apm/configure-apm-server.md @@ -373,7 +373,7 @@ Allow anonymous access only for specified agents and/or services. This is primar : Specifies the minimum log level. One of *debug*, *info*, *warning*, or *error*. Defaults to *info*. `logging.selectors` -: The list of debugging-only selector tags used by different APM Server components. Use *** to enable debug output for all components. For example, add *publish* to display all the debug messages related to event publishing. +: The list of debugging-only selector tags used by different APM Server components. Use *\** to enable debug output for all components. For example, add *publish* to display all the debug messages related to event publishing. 
`logging.metrics.enabled` : If enabled, APM Server periodically logs its internal metrics that have changed in the last period. Defaults to *true*. diff --git a/solutions/observability/apm/configure-elasticsearch-output.md b/solutions/observability/apm/configure-elasticsearch-output.md index 5ad576e0a..fb77616a4 100644 --- a/solutions/observability/apm/configure-elasticsearch-output.md +++ b/solutions/observability/apm/configure-elasticsearch-output.md @@ -38,7 +38,7 @@ When sending data to a secured cluster through the `elasticsearch` output, APM S output.elasticsearch: hosts: ["https://myEShost:9200"] username: "apm_writer" - password: "{pwd}" + password: "YOUR_PASSWORD" ``` **API key authentication:** @@ -204,7 +204,7 @@ Authentication is specified in the APM Server configuration file: output.elasticsearch: hosts: ["https://myEShost:9200"] username: "apm_writer" <1> - password: "{pwd}" + password: "YOUR_PASSWORD" ``` 1. This user needs the privileges required to publish events to {{es}}. To create a user like this, see [Create a *writer* role](/solutions/observability/apm/create-assign-feature-roles-to-apm-server-users.md#apm-privileges-to-publish-events). diff --git a/solutions/observability/apm/configure-kafka-output.md b/solutions/observability/apm/configure-kafka-output.md index 8c442900c..95607b240 100644 --- a/solutions/observability/apm/configure-kafka-output.md +++ b/solutions/observability/apm/configure-kafka-output.md @@ -135,7 +135,7 @@ output.kafka: message: "ERR" ``` -This configuration results in topics named `critical-9.0.0-beta1`, `error-9.0.0-beta1`, and `logs-9.0.0-beta1`. +This configuration results in topics named _critical-{{version}}_, _error-{{version}}_, and _logs-{{version}}_. ### `key` [_key] diff --git a/solutions/observability/apm/configure-logstash-output.md b/solutions/observability/apm/configure-logstash-output.md index 1bdf3184d..bcc7644ea 100644 --- a/solutions/observability/apm/configure-logstash-output.md +++ b/solutions/observability/apm/configure-logstash-output.md @@ -90,12 +90,12 @@ output { Every event sent to {{ls}} contains a special field called [`@metadata`](logstash://reference/event-dependent-configuration.md#metadata) that you can use in {{ls}} for conditionals, filtering, indexing and more. APM Server sends the following `@metadata` to {{ls}}: -```json +```json subs=true { ... "@metadata": { "beat": "apm-server", <1> - "version": "9.0.0-beta1" <2> + "version": "{{version}}" <2> } } ``` @@ -218,7 +218,7 @@ The `proxy_use_local_resolver` option determines if {{ls}} hostnames are resolve #### `index` [apm-logstash-index] -The index root name to write events to. The default is `apm-server`. For example `"apm"` generates `"[apm-]9.0.0-beta1-YYYY.MM.DD"` indices (for example, `"apm-9.0.0-beta1-2017.04.26"`). +The index root name to write events to. The default is `apm-server`. For example `"apm"` generates `"[apm-]VERSION-YYYY.MM.DD"` indices (for example, _"apm-{{version}}-2017.04.26"_). ::::{note} This parameter’s value will be assigned to the `metadata.beat` field. It can then be accessed in {{ls}}'s output section as `%{[@metadata][beat]}`. 
diff --git a/solutions/observability/apm/configure-output-for-elasticsearch-service-on-elastic-cloud.md b/solutions/observability/apm/configure-output-for-elasticsearch-service-on-elastic-cloud.md
index 486b3b5ee..663b19c97 100644
--- a/solutions/observability/apm/configure-output-for-elasticsearch-service-on-elastic-cloud.md
+++ b/solutions/observability/apm/configure-output-for-elasticsearch-service-on-elastic-cloud.md
@@ -25,7 +25,7 @@ Example:

```yaml
cloud.id: "staging:dXMtZWFzdC0xLmF3cy5mb3VuZC5pbyRjZWM2ZjI2MWE3NGJmMjRjZTMzYmI4ODExYjg0Mjk0ZiRjNmMyY2E2ZDA0MjI0OWFmMGNjN2Q3YTllOTYyNTc0Mw=="
-cloud.auth: "elastic:{pwd}"
+cloud.auth: "elastic:YOUR_PASSWORD"
```

These settings can be also specified at the command line, like this:

diff --git a/solutions/observability/apm/create-assign-feature-roles-to-apm-server-users.md b/solutions/observability/apm/create-assign-feature-roles-to-apm-server-users.md
index 213bf0975..df6b46c39 100644
--- a/solutions/observability/apm/create-assign-feature-roles-to-apm-server-users.md
+++ b/solutions/observability/apm/create-assign-feature-roles-to-apm-server-users.md
@@ -58,9 +58,9 @@ To grant an APM Server user the required privileges for writing events to {{es}}
| Index | `create_doc` on `traces-apm*`, `logs-apm*`, and `metrics-apm*` indices | Write events into {{es}} |
| Cluster | `monitor` | Allows cluster UUID checks, which are performed as part of APM server startup preconditions if [Elasticsearch security](elasticsearch://reference/elasticsearch/configuration-reference/security-settings.md) is enabled (it is enabled by default), and allows a license check, which is required if [tail-based sampling](/solutions/observability/apm/transaction-sampling.md#apm-tail-based-sampling) is enabled. |

-::::{note}
-If you have explicitly disabled Elastic security *and* you are *not* using tail-based sampling, the `monitor` privilege may not be necessary.
-::::
+    ::::{note}
+    If you have explicitly disabled Elastic security *and* you are *not* using tail-based sampling, the `monitor` privilege may not be necessary.
+    ::::

1. Assign the **general writer role** to APM Server users who need to publish APM data.

diff --git a/solutions/observability/apm/index.md b/solutions/observability/apm/index.md
index 119efabec..562564003 100644
--- a/solutions/observability/apm/index.md
+++ b/solutions/observability/apm/index.md
@@ -12,7 +12,7 @@ applies_to:
Elastic APM is an application performance monitoring system built on the {{stack}}. It allows you to monitor software services and applications in real time, by collecting detailed performance information on response time for incoming requests, database queries, calls to caches, external HTTP requests, and more. This makes it easy to pinpoint and fix performance problems quickly.

:::{image} /solutions/images/observability-apm-app-landing.png
-:alt: Applications UI in {kib}
+:alt: Applications UI in {{kib}}
:screenshot:
:::

diff --git a/solutions/observability/apm/infrastructure.md b/solutions/observability/apm/infrastructure.md
index d4e8c571a..aae25b58f 100644
--- a/solutions/observability/apm/infrastructure.md
+++ b/solutions/observability/apm/infrastructure.md
@@ -19,7 +19,7 @@ The **Infrastructure** tab provides information about the containers, pods, and

* **Pods**: Uses the `kubernetes.pod.name` from the [APM metrics data streams](/solutions/observability/apm/metrics.md).
* **Containers**: Uses the `container.id` from the [APM metrics data streams](/solutions/observability/apm/metrics.md).
-* **Hosts**: If the application is containerized—​if the APM metrics documents include `container.id`-- the `host.name` is used from the infrastructure data streams (filtered by `container.id`). If not, `host.hostname` is used from the APM metrics data streams. +* **Hosts**: If the application is containerized—​if the APM metrics documents include `container.id`—the `host.name` is used from the infrastructure data streams (filtered by `container.id`). If not, `host.hostname` is used from the APM metrics data streams. :::{image} /solutions/images/serverless-infra.png :alt: Example view of the Infrastructure tab in the Applications UI diff --git a/solutions/observability/apm/installation-layout.md b/solutions/observability/apm/installation-layout.md index 5c875b036..baafde5d3 100644 --- a/solutions/observability/apm/installation-layout.md +++ b/solutions/observability/apm/installation-layout.md @@ -26,7 +26,7 @@ View the installation layout and default paths for both Fleet-managed APM Server : Main {{agent}} {{fleet}} encrypted configuration `/Library/Elastic/Agent/data/elastic-agent-*/logs/elastic-agent.ndjson` -: Log files for {{agent}} and {{beats}} shippers[¹](#footnote-1) +: Log files for {{agent}} and {{beats}} shippers[^1^](#footnote-1) `/usr/bin/elastic-agent` : Shell wrapper installed into PATH @@ -45,7 +45,7 @@ You can install {{agent}} in a custom base path other than `/Library`. When ins : Main {{agent}} {{fleet}} encrypted configuration `/opt/Elastic/Agent/data/elastic-agent-*/logs/elastic-agent.ndjson` -: Log files for {{agent}} and {{beats}} shippers[¹](#footnote-1) +: Log files for {{agent}} and {{beats}} shippers[^1^](#footnote-1) `/usr/bin/elastic-agent` : Shell wrapper installed into PATH @@ -64,7 +64,7 @@ You can install {{agent}} in a custom base path other than `/opt`. When install : Main {{agent}} {{fleet}} encrypted configuration `C:\Program Files\Elastic\Agent\data\elastic-agent-*\logs\elastic-agent.ndjson` -: Log files for {{agent}} and {{beats}} shippers[¹](#footnote-1) +: Log files for {{agent}} and {{beats}} shippers[^1^](#footnote-1) You can install {{agent}} in a custom base path other than `C:\Program Files`. When installing {{agent}} with the `.\elastic-agent.exe install` command, use the `--base-path` CLI option to specify the custom base path. :::::: @@ -80,7 +80,7 @@ You can install {{agent}} in a custom base path other than `C:\Program Files`. : Main {{agent}} {{fleet}} encrypted configuration `/var/lib/elastic-agent/data/elastic-agent-*/logs/elastic-agent.ndjson` -: Log files for {{agent}} and {{beats}} shippers[¹](#footnote-1) +: Log files for {{agent}} and {{beats}} shippers[^1^](#footnote-1) `/usr/bin/elastic-agent` : Shell wrapper installed into PATH @@ -97,7 +97,7 @@ You can install {{agent}} in a custom base path other than `C:\Program Files`. : Main {{agent}} {{fleet}} encrypted configuration `/var/lib/elastic-agent/data/elastic-agent-*/logs/elastic-agent.ndjson` -: Log files for {{agent}} and {{beats}} shippers[¹](#footnote-1) +: Log files for {{agent}} and {{beats}} shippers[^1^](#footnote-1) `/usr/bin/elastic-agent` : Shell wrapper installed into PATH @@ -151,5 +151,4 @@ For the deb and rpm distributions, these paths are set in the init script or in ::::::: -$$$footnote-1$$$ -¹ Logs file names end with a date (`YYYYMMDD`) and optional number: `elastic-agent-YYYYMMDD.ndjson`, `elastic-agent-YYYYMMDD-1.ndjson`, and so on as new files are created during rotation. 
\ No newline at end of file +^1^ $$$footnote-1$$$ Log file names end with a date (`YYYYMMDD`) and optional number: `elastic-agent-YYYYMMDD.ndjson`, `elastic-agent-YYYYMMDD-1.ndjson`, and so on as new files are created during rotation. \ No newline at end of file

diff --git a/solutions/observability/apm/reduce-storage.md b/solutions/observability/apm/reduce-storage.md index 97b8939e7..d4fd63c52 100644 --- a/solutions/observability/apm/reduce-storage.md +++ b/solutions/observability/apm/reduce-storage.md @@ -19,7 +19,7 @@ See [Transaction sampling](/solutions/observability/apm/transaction-sampling.md)

## Enable span compression [enable_span_compression]

-In some cases, APM agents may collect large amounts of very similar or identical spans in a transaction. These repeated, similar spans often don’t provide added benefit, especially if they are of very short duration. Span compression takes these similar spans and compresses them into a single span-- retaining important information but reducing processing and storage overhead.
+In some cases, APM agents may collect large amounts of very similar or identical spans in a transaction. These repeated, similar spans often don’t provide added benefit, especially if they are of very short duration. Span compression takes these similar spans and compresses them into a single span—retaining important information but reducing processing and storage overhead.

See [Span compression](/solutions/observability/apm/spans.md#apm-spans-span-compression) to learn more.

diff --git a/solutions/observability/apm/switch-an-elastic-cloud-cluster-to-apm-integration.md b/solutions/observability/apm/switch-an-elastic-cloud-cluster-to-apm-integration.md index 0af998c04..6917e6eb3 100644 --- a/solutions/observability/apm/switch-an-elastic-cloud-cluster-to-apm-integration.md +++ b/solutions/observability/apm/switch-an-elastic-cloud-cluster-to-apm-integration.md @@ -15,7 +15,7 @@ applies_to:

## Upgrade the {{stack}} [apm-integration-upgrade-ess-1]

-Use the {{ecloud}} Console to upgrade the {{stack}} to version 9.0.0-beta1. See the [Upgrade guide](/deploy-manage/upgrade/deployment-or-cluster.md) for details.
+Use the {{ecloud}} Console to upgrade the {{stack}} to version {{version}}. See the [Upgrade guide](/deploy-manage/upgrade/deployment-or-cluster.md) for details.
## Switch to {{agent}} [apm-integration-upgrade-ess-2] diff --git a/solutions/observability/apm/switch-to-elastic-apm-integration.md b/solutions/observability/apm/switch-to-elastic-apm-integration.md index b0406e291..8d3a9cf7c 100644 --- a/solutions/observability/apm/switch-to-elastic-apm-integration.md +++ b/solutions/observability/apm/switch-to-elastic-apm-integration.md @@ -20,7 +20,7 @@ The APM integration offers a number of benefits over the standalone method of ru * More granular data control * Errors and metrics data streams are shared with other data sources — which means better long-term integration with the logs and metrics apps * Removes template inheritance for {{ilm-init}} policies and makes use of new {{es}} index and component templates -* Fixes `resource 'apm-9.0.0-beta1-$type' exists, but it is not an alias` error +* Fixes _resource 'apm-{{version}}-$type' exists, but it is not an alias_ error **APM Integration**: diff --git a/solutions/observability/apm/transaction-sampling.md b/solutions/observability/apm/transaction-sampling.md index 68ded94b5..0210d390e 100644 --- a/solutions/observability/apm/transaction-sampling.md +++ b/solutions/observability/apm/transaction-sampling.md @@ -194,7 +194,7 @@ stack: serverless: ``` -A sampled trace retains all data associated with it. A non-sampled trace drops all [span](/solutions/observability/apm/spans.md) and [transaction](/solutions/observability/apm/transactions.md) data.[¹](#footnote-1) Regardless of the sampling decision, all traces retain [error](/solutions/observability/apm/errors.md) data. +A sampled trace retains all data associated with it. A non-sampled trace drops all [span](/solutions/observability/apm/spans.md) and [transaction](/solutions/observability/apm/transactions.md) data.[^1^](#footnote-1) Regardless of the sampling decision, all traces retain [error](/solutions/observability/apm/errors.md) data. Some visualizations in the {{apm-app}}, like latency, are powered by aggregated transaction and span [metrics](/solutions/observability/apm/metrics.md). The way these metrics are calculated depends on the sampling method used: @@ -206,7 +206,7 @@ For all sampling methods, metrics are weighted by the inverse sampling rate of t These calculation methods ensure that the APM app provides the most accurate metrics possible given the sampling strategy in use, while also accounting for the head-based sampling rate to estimate the full population of traces. -¹ $$$footnote-1$$$ Real User Monitoring (RUM) traces are an exception to this rule. The {{kib}} apps that utilize RUM data depend on transaction events, so non-sampled RUM traces retain transaction data — only span data is dropped. +^1^ $$$footnote-1$$$ Real User Monitoring (RUM) traces are an exception to this rule. The {{kib}} apps that utilize RUM data depend on transaction events, so non-sampled RUM traces retain transaction data — only span data is dropped. 
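To make the inverse-rate weighting described above concrete, here is a minimal arithmetic sketch; the sample rate and document count are hypothetical values, not taken from any deployment:

```sh
# Hypothetical example of inverse-sampling-rate weighting:
# at a head-based sample rate of 0.10, each sampled transaction
# document stands in for 1 / 0.10 = 10 real transactions.
SAMPLE_RATE=0.10
SAMPLED_COUNT=42   # sampled transaction docs observed in some window

# %.0f rounds the floating-point division to the nearest integer.
awk -v n="$SAMPLED_COUNT" -v r="$SAMPLE_RATE" \
  'BEGIN { printf "estimated total transactions: %.0f\n", n / r }'
```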
## Sample rates [_sample_rates]

diff --git a/solutions/observability/apm/use-internal-collection-to-send-monitoring-data.md b/solutions/observability/apm/use-internal-collection-to-send-monitoring-data.md index 102f6e791..f2f6fbe16 100644 --- a/solutions/observability/apm/use-internal-collection-to-send-monitoring-data.md +++ b/solutions/observability/apm/use-internal-collection-to-send-monitoring-data.md @@ -30,7 +30,7 @@ Use internal collectors to send {{beats}} monitoring data directly to your monit

  monitoring:
    enabled: true
    cloud.id: 'staging:dXMtZWFzdC0xLmF3cy5mb3VuZC5pbyRjZWM2ZjI2MWE3NGJmMjRjZTMzYmI4ODExYjg0Mjk0ZiRjNmMyY2E2ZDA0MjI0OWFmMGNjN2Q3YTllOTYyNTc0Mw=='
-    cloud.auth: 'elastic:{pwd}'
+    cloud.auth: 'elastic:YOUR_PASSWORD'
  ```

If you configured a different output, such as {{ls}}, or you want to send APM Server monitoring events to a separate {{es}} cluster (referred to as the *monitoring cluster*), you must specify additional configuration options. For example:

diff --git a/solutions/observability/applications/llm-observability.md b/solutions/observability/applications/llm-observability.md index 1c86c6383..4b80fbdbc 100644 --- a/solutions/observability/applications/llm-observability.md +++ b/solutions/observability/applications/llm-observability.md @@ -6,25 +6,25 @@ navigation_title: "LLM Observability"

While LLMs hold incredible transformative potential, they also bring complex challenges in reliability, performance, and cost management. Traditional monitoring tools require an evolved set of observability capabilities to ensure these models operate efficiently and effectively. To keep your LLM-powered applications reliable, efficient, cost-effective, and easy to troubleshoot, Elastic provides a powerful LLM observability framework including key metrics, logs, and traces, along with pre-configured, out-of-the-box dashboards that deliver deep insights into model prompts and responses, performance, usage, and costs.

-Elastic’s end-to-end LLM observability is delivered through the following methods:
+Elastic’s end-to-end LLM observability is delivered through the following methods:

- Metrics and logs ingestion for LLM APIs (via [Elastic integrations](https://www.elastic.co/guide/en/integrations/current/introduction.html))
- APM tracing for LLM models (via [instrumentation](https://elastic.github.io/opentelemetry/))

## Metrics and logs ingestion for LLM APIs (via Elastic integrations)

-Elastic’s LLM integrations now support the most widely adopted models, including OpenAI, Azure OpenAI, and a diverse range of models hosted on Amazon Bedrock and Google Vertex AI. Depending on the LLM provider you choose, the following table shows which source you can use and which type of data -- log or metrics -- you can collect.
+Elastic’s LLM integrations now support the most widely adopted models, including OpenAI, Azure OpenAI, and a diverse range of models hosted on Amazon Bedrock and Google Vertex AI. Depending on the LLM provider you choose, the following table shows which source you can use and which type of data—logs or metrics—you can collect.
-| **LLM Provider** | **Source** | **Metrics** | **Logs** |
+| **LLM Provider** | **Source** | **Metrics** | **Logs** |
|--------|------------|------------|------|
-| [Amazon Bedrock](https://www.elastic.co/guide/en/integrations/current/aws_bedrock.html)| [AWS CloudWatch Logs](https://github.com/elastic/integrations/tree/main/packages/aws_bedrock#compatibility) | ✅ | ✅ |
-| [Azure OpenAI](https://www.elastic.co/guide/en/integrations/current/azure_openai.html)| [Azure Monitor and Event Hubs](https://github.com/elastic/integrations/tree/main/packages/azure_openai#azure-openai-integration) | ✅ | ✅ |
-| [GCP Vertex AI](https://www.elastic.co/guide/en/integrations/current/gcp_vertexai.html) | [GCP Cloud Monitoring](https://github.com/elastic/integrations/tree/main/packages/gcp_vertexai#overview) | ✅ | 🚧 |
-| [OpenAI](https://www.elastic.co/guide/en/integrations/current/openai.html) | [OpenAI Usage API](https://platform.openai.com/docs/api-reference/usage) | ✅| 🚧 |
+| [Amazon Bedrock](https://www.elastic.co/guide/en/integrations/current/aws_bedrock.html)| [AWS CloudWatch Logs](https://github.com/elastic/integrations/tree/main/packages/aws_bedrock#compatibility) | ✅ | ✅ |
+| [Azure OpenAI](https://www.elastic.co/guide/en/integrations/current/azure_openai.html)| [Azure Monitor and Event Hubs](https://github.com/elastic/integrations/tree/main/packages/azure_openai#azure-openai-integration) | ✅ | ✅ |
+| [GCP Vertex AI](https://www.elastic.co/guide/en/integrations/current/gcp_vertexai.html) | [GCP Cloud Monitoring](https://github.com/elastic/integrations/tree/main/packages/gcp_vertexai#overview) | ✅ | 🚧 |
+| [OpenAI](https://www.elastic.co/guide/en/integrations/current/openai.html) | [OpenAI Usage API](https://platform.openai.com/docs/api-reference/usage) | ✅| 🚧 |

## APM tracing for LLM models (via instrumentation)

-Elastic offers specialized OpenTelemetry Protocol (OTLP) tracing for applications leveraging LLM models hosted on OpenAI, Azure, and Amazon Bedrock, providing a detailed view of request flows. This tracing capability captures critical insights, including the specific models used, request duration, errors encountered, token consumption per request, and the interaction between prompts and responses. Ideal for troubleshooting, APM tracing allows you to find exactly where the issue is happening with precision and efficiency in your LLM-powered application. 
+Elastic offers specialized OpenTelemetry Protocol (OTLP) tracing for applications leveraging LLM models hosted on OpenAI, Azure, and Amazon Bedrock, providing a detailed view of request flows. This tracing capability captures critical insights, including the specific models used, request duration, errors encountered, token consumption per request, and the interaction between prompts and responses. Ideal for troubleshooting, APM tracing allows you to find exactly where the issue is happening with precision and efficiency in your LLM-powered application.

You can instrument the application with one of the following Elastic Distributions of OpenTelemetry (EDOT):

@@ -36,7 +36,7 @@ EDOT includes many types of instrumentation. This [table](https://elastic.github.

### Getting started

-Check [these instructions](https://elastic.github.io/opentelemetry/use-cases/llm/) on how to setup and collect OpenTelemetry data for your LLM applications. 
+Check [these instructions](https://elastic.github.io/opentelemetry/use-cases/llm/) on how to set up and collect OpenTelemetry data for your LLM applications.
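As a rough sketch of what pointing an instrumented application at an OTLP endpoint can look like (the endpoint URL, token, and service name are placeholders; `opentelemetry-instrument` is the standard OpenTelemetry Python wrapper, which the EDOT Python distribution builds on):

```sh
# Configure the OTLP exporter with placeholder values.
export OTEL_EXPORTER_OTLP_ENDPOINT="https://my-apm-server.example.com:443"
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer YOUR_SECRET_TOKEN"
export OTEL_RESOURCE_ATTRIBUTES="service.name=my-llm-app"

# Zero-code instrumentation of a hypothetical Python LLM application.
opentelemetry-instrument python app.py
```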
## Use cases

@@ -51,7 +51,7 @@ For an SRE team optimizing a customer support system powered by Azure OpenAI, El

### Troubleshoot OpenAI-powered applications

-Consider an enterprise utilizing an OpenAI model for real-time user interactions. Encountering unexplained delays, an SRE can use OpenAI tracing to dissect the transaction pathway, identify if one specific API call or model invocation is the bottleneck, and monitor a request to see the exact prompt and response between the user and the LLM. 
+Consider an enterprise utilizing an OpenAI model for real-time user interactions. Encountering unexplained delays, an SRE can use OpenAI tracing to dissect the transaction pathway, identify if one specific API call or model invocation is the bottleneck, and monitor a request to see the exact prompt and response between the user and the LLM.

:::{image} /solutions/images/llm-openai-applications.png
:alt: Troubleshoot OpenAI-powered applications

diff --git a/solutions/observability/applications/user-experience.md b/solutions/observability/applications/user-experience.md index 18f9e7bd8..999852c38 100644 --- a/solutions/observability/applications/user-experience.md +++ b/solutions/observability/applications/user-experience.md @@ -51,13 +51,13 @@ You won’t be able to fix any problems from viewing these metrics alone, but yo

::::{dropdown} Metric reference

First contentful paint
-: Focuses on the initial rendering and measures the time from when the page starts loading to when any part of the page’s content is displayed on the screen. The agent uses the [Paint timing API](https://www.w3.org/TR/paint-timing/#first-contentful-paint) available in the browser to capture the timing information.[¹](#footnote-1)
+: Focuses on the initial rendering and measures the time from when the page starts loading to when any part of the page’s content is displayed on the screen. The agent uses the [Paint timing API](https://www.w3.org/TR/paint-timing/#first-contentful-paint) available in the browser to capture the timing information.[^1^](#footnote-1)

Total blocking time
-: The sum of the blocking time (duration above 50 ms) for each long task that occurs between the First contentful paint and the time when the transaction is completed. Total blocking time is a great companion metric for [Time to interactive](https://web.dev/tti/) (TTI) which is lab metric and not available in the field through browser APIs. The agent captures TBT based on the number of long tasks that occurred during the page load lifecycle.[²](#footnote-2)
+: The sum of the blocking time (duration above 50 ms) for each long task that occurs between the First contentful paint and the time when the transaction is completed. Total blocking time is a great companion metric for [Time to interactive](https://web.dev/tti/) (TTI), which is a lab metric and not available in the field through browser APIs. The agent captures TBT based on the number of long tasks that occurred during the page load lifecycle.[^2^](#footnote-2)

`Long Tasks`
-: A long task is any user activity or browser task that monopolize the UI thread for extended periods (greater than 50 milliseconds) and block other critical tasks (frame rate or input latency) from being executed.[³](#footnote-3)
+: A long task is any user activity or browser task that monopolizes the UI thread for extended periods (greater than 50 milliseconds) and blocks other critical tasks (frame rate or input latency) from being executed.[^3^](#footnote-3)

Number of long tasks
: The number of long tasks.
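To illustrate the Total blocking time definition with hypothetical numbers, two long tasks of 120 ms and 90 ms each contribute only their duration above the 50 ms threshold:

```sh
# Hypothetical TBT calculation for two long tasks of 120 ms and 90 ms:
awk 'BEGIN { printf "TBT = (120-50) + (90-50) = %d ms\n", (120-50) + (90-50) }'
```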
@@ -77,17 +77,16 @@ These metrics tell an important story about how users experience your website. B [Core Web Vitals](https://web.dev/vitals/) is a recent initiative from Google to introduce a new set of metrics that better categorize good and bad sites by quantifying the real-world user experience. This is done by looking at three key metrics: loading performance, visual stability, and interactivity: Largest contentful paint (LCP) -: Loading performance. LCP is the timestamp when the main content of a page has likely loaded. To users, this is the *perceived* loading speed of your site. To provide a good user experience, Google recommends an LCP of fewer than 2.5 seconds.[⁴](#footnote-4) +: Loading performance. LCP is the timestamp when the main content of a page has likely loaded. To users, this is the *perceived* loading speed of your site. To provide a good user experience, Google recommends an LCP of fewer than 2.5 seconds.[^4^](#footnote-4) Interaction to next paint (INP) -: Responsiveness to user interactions. The INP value comes from measuring the latency of all click, tap, and keyboard interactions that happen throughout a single page visit and choosing the longest interaction observed. To provide a good user experience, Google recommends an INP of fewer than 200 milliseconds.[⁵](#footnote-5) +: Responsiveness to user interactions. The INP value comes from measuring the latency of all click, tap, and keyboard interactions that happen throughout a single page visit and choosing the longest interaction observed. To provide a good user experience, Google recommends an INP of fewer than 200 milliseconds.[^5^](#footnote-5) ::::{note} Previous {{kib}} versions included the metric [First input delay (FID)](https://web.dev/fid/) in the User Experience app. Starting with version 8.12, FID was replaced with *Interaction to next paint (INP)*. The APM RUM agent started collecting INP data in version 5.16.0. If you use an earlier version of the RUM agent with {{kib}} version 8.12 or later, it will *not* capture INP data and there will be *no data* to show in the User Experience app: -| | | | -| --- | --- | --- | | | **Kibana version ≥ 8.12** | **Kibana version < 8.12** | +| --- | --- | --- | | **RUM agent version ≥ 5.16.0** | INP data will be visible. | FID data will be visible. | | **RUM agent version < 5.16.0** | The INP section will be empty. | FID data will be visible. | @@ -96,10 +95,10 @@ RUM agent version ≥ 5.16.0 will continue to collect FID metrics so, while FID :::: Cumulative layout shift (CLS) -: Visual stability. Is content moving around because of `async` resource loading or dynamic content additions? CLS measures these frustrating unexpected layout shifts. To provide a good user experience, Google recommends a CLS score of less than `.1`.[⁶](#footnote-6) +: Visual stability. Is content moving around because of `async` resource loading or dynamic content additions? CLS measures these frustrating unexpected layout shifts. 
To provide a good user experience, Google recommends a CLS score of less than `.1`.[^6^](#footnote-6) ::::{tip} -[Beginning in May 2021](https://webmasters.googleblog.com/2020/11/timing-for-page-experience.md), Google will start using Core Web Vitals as part of their ranking algorithm and will open up the opportunity for websites to rank in the "top stories" position without needing to leverage [AMP](https://amp.dev/).[⁷](#footnote-7) +[Beginning in May 2021](https://webmasters.googleblog.com/2020/11/timing-for-page-experience.md), Google will start using Core Web Vitals as part of their ranking algorithm and will open up the opportunity for websites to rank in the "top stories" position without needing to leverage [AMP](https://amp.dev/).[^7^](#footnote-7) :::: ### Load/view distribution [user-experience-distribution] @@ -130,10 +129,10 @@ Have a question? Want to leave feedback? Visit the [{{user-experience}} discussi #### References [user-experience-references] -¹ $$$footnote-1$$$ More information: [developer.mozilla.org](https://developer.mozilla.org/en-US/docs/Glossary/First_contentful_paint)
-² $$$footnote-2$$$ More information: [web.dev](https://web.dev/tbt/)
-³ $$$footnote-3$$$ More information: [developer.mozilla.org](https://developer.mozilla.org/en-US/docs/Web/API/Long_Tasks_API)
-⁴ $$$footnote-4$$$ Source: [web.dev](https://web.dev/lcp/)
-⁵ $$$footnote-5$$$ Source: [web.dev](https://web.dev/articles/inp)
-⁶ $$$footnote-6$$$ Source: [web.dev](https://web.dev/cls/)
-⁷ $$$footnote-7$$$ Source: [webmasters.googleblog.com](https://webmasters.googleblog.com/2020/05/evaluating-page-experience.md)
\ No newline at end of file +^1^ $$$footnote-1$$$ More information: [developer.mozilla.org](https://developer.mozilla.org/en-US/docs/Glossary/First_contentful_paint)
+^2^ $$$footnote-2$$$ More information: [web.dev](https://web.dev/tbt/)
+^3^ $$$footnote-3$$$ More information: [developer.mozilla.org](https://developer.mozilla.org/en-US/docs/Web/API/Long_Tasks_API)
+^4^ $$$footnote-4$$$ Source: [web.dev](https://web.dev/lcp/)
+^5^ $$$footnote-5$$$ Source: [web.dev](https://web.dev/articles/inp)
+^6^ $$$footnote-6$$$ Source: [web.dev](https://web.dev/cls/)
+^7^ $$$footnote-7$$$ Source: [webmasters.googleblog.com](https://webmasters.googleblog.com/2020/05/evaluating-page-experience.md)
\ No newline at end of file diff --git a/solutions/observability/incident-management/create-an-slo-burn-rate-rule.md b/solutions/observability/incident-management/create-an-slo-burn-rate-rule.md index ebdbd7fc4..9da48cf6d 100644 --- a/solutions/observability/incident-management/create-an-slo-burn-rate-rule.md +++ b/solutions/observability/incident-management/create-an-slo-burn-rate-rule.md @@ -29,7 +29,7 @@ Choose which SLO to monitor and then define multiple burn rate windows with appr ::: ::::{tip} -These steps show how to use the **Alerts** UI. You can also create an SLO burn rate rule directly from **Observability*** → ***SLOs**. Click the more options icon (![More options](/solutions/images/serverless-boxesVertical.svg "")) to the right of the SLO you want to add a burn rate rule for, and select **![Bell](/solutions/images/serverless-bell.svg "") Create new alert rule** from the menu. +These steps show how to use the **Alerts** UI. You can also create an SLO burn rate rule directly from **Observability** → **SLOs**. Click the more options icon (![More options](/solutions/images/serverless-boxesVertical.svg "")) to the right of the SLO you want to add a burn rate rule for, and select **![Bell](/solutions/images/serverless-bell.svg "") Create new alert rule** from the menu. When you use the UI to create an SLO, a default SLO burn rate alert rule is created automatically. The burn rate rule will use the default configuration and no connector. You must configure a connector if you want to receive alerts for SLO breaches. diff --git a/solutions/observability/incident-management/create-tls-certificate-rule.md b/solutions/observability/incident-management/create-tls-certificate-rule.md index 2c2b865ea..47c2681db 100644 --- a/solutions/observability/incident-management/create-tls-certificate-rule.md +++ b/solutions/observability/incident-management/create-tls-certificate-rule.md @@ -12,7 +12,11 @@ In {{kib}}, you can create a rule that notifies you when one or more of your mon There are two types of TLS certificate rule: * [Synthetics TLS certificate rule](#tls-rule-synthetics) for use with [Elastic Synthetics](/solutions/observability/synthetics/index.md). -* [8.15.0] [Uptime TLS rule](#tls-rule-uptime) for use with the {{uptime-app}}. +* [Uptime TLS rule](#tls-rule-uptime) for use with the {{uptime-app}}. + + :::{admonition} Deprecated in 8.15.0 + Uptime was deprecated in 8.15.0. Use Synthetics instead. + ::: ## Synthetics TLS certificate rule [tls-rule-synthetics] diff --git a/solutions/observability/infra-and-hosts/add-symbols-for-native-frames.md b/solutions/observability/infra-and-hosts/add-symbols-for-native-frames.md index 15276a10e..8570d973e 100644 --- a/solutions/observability/infra-and-hosts/add-symbols-for-native-frames.md +++ b/solutions/observability/infra-and-hosts/add-symbols-for-native-frames.md @@ -14,8 +14,8 @@ To see function names and line numbers in traces of applications written in prog Click the appropriate link for your system to download the `symbtool` binary: -* [x86_64](https://artifacts.elastic.co/downloads/prodfiler/symbtool-9.0.0-beta1-linux-x86_64.tar.gz) -* [ARM64](https://artifacts.elastic.co/downloads/prodfiler/symbtool-9.0.0-beta1-linux-arm64.tar.gz) +* [x86_64](https://artifacts.elastic.co/downloads/prodfiler/symbtool-{{version}}-linux-x86_64.tar.gz) +* [ARM64](https://artifacts.elastic.co/downloads/prodfiler/symbtool-{{version}}-linux-arm64.tar.gz) ::::{note} The `symbtool` binary currently requires a Linux machine. 
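A minimal download-and-extract sketch for the x86_64 `symbtool` binary on Linux, following the same `wget | tar` pattern used for the other backend binaries in these docs:

```sh subs=true
# Download and unpack symbtool for x86_64 into the current directory.
wget -O- "https://artifacts.elastic.co/downloads/prodfiler/symbtool-{{version}}-linux-x86_64.tar.gz" | tar xzf -
```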
diff --git a/solutions/observability/infra-and-hosts/operate-universal-profiling-backend.md b/solutions/observability/infra-and-hosts/operate-universal-profiling-backend.md index 1a01df27f..188ebb322 100644 --- a/solutions/observability/infra-and-hosts/operate-universal-profiling-backend.md +++ b/solutions/observability/infra-and-hosts/operate-universal-profiling-backend.md @@ -3,7 +3,7 @@ navigation_title: "Operate the backend" mapped_pages: - https://www.elastic.co/guide/en/observability/current/profiling-self-managed-ops.html applies_to: - stack: + stack: --- @@ -121,7 +121,7 @@ The following sections show the most relevant metrics exposed by the backend bin * `collection_agent.indexing.bulk_indexer_failure_count`: number of times the bulk indexer failed to ingest data in Elasticsearch. * `collection_agent.indexing.document_count.*`: counter that represents the number of documents ingested in Elasticsearch for each index; can be used to calculate the rate of ingestion for each index. * `grpc_server_handling_seconds`: histogram of the time spent by the gRPC server to handle requests. -* `grpc_server_msg_received_total: count of messages received by the gRPC server; can be used to calculate the rate of ingestion for each RPC. +* `grpc_server_msg_received_total`: count of messages received by the gRPC server; can be used to calculate the rate of ingestion for each RPC. * `grpc_server_handled_total`: count of messages processed by the gRPC server; can be used to calculate the availability of the gRPC server for each RPC. **Symbolizer metrics** diff --git a/solutions/observability/infra-and-hosts/step-4-run-backend-applications.md b/solutions/observability/infra-and-hosts/step-4-run-backend-applications.md index baefa3d8a..986255054 100644 --- a/solutions/observability/infra-and-hosts/step-4-run-backend-applications.md +++ b/solutions/observability/infra-and-hosts/step-4-run-backend-applications.md @@ -499,16 +499,16 @@ docker logs pf-elastic-symbolizer For x86_64 - ```shell - wget -O- "https://artifacts.elastic.co/downloads/prodfiler/pf-elastic-collector-9.0.0-beta1-linux-x86_64.tar.gz" | tar xzf - - wget -O- "https://artifacts.elastic.co/downloads/prodfiler/pf-elastic-symbolizer-9.0.0-beta1-linux-x86_64.tar.gz" | tar xzf - + ```shell subs=true + wget -O- "https://artifacts.elastic.co/downloads/prodfiler/pf-elastic-collector-{{version}}-linux-x86_64.tar.gz" | tar xzf - + wget -O- "https://artifacts.elastic.co/downloads/prodfiler/pf-elastic-symbolizer-{{version}}-linux-x86_64.tar.gz" | tar xzf - ``` For ARM64 - ```shell - wget -O- "https://artifacts.elastic.co/downloads/prodfiler/pf-elastic-collector-9.0.0-beta1-linux-arm64.tar.gz" | tar xzf - - wget -O- "https://artifacts.elastic.co/downloads/prodfiler/pf-elastic-symbolizer-9.0.0-beta1-linux-arm64.tar.gz" | tar xzf - + ```shell subs=true + wget -O- "https://artifacts.elastic.co/downloads/prodfiler/pf-elastic-collector-{{version}}-linux-arm64.tar.gz" | tar xzf - + wget -O- "https://artifacts.elastic.co/downloads/prodfiler/pf-elastic-symbolizer-{{version}}-linux-arm64.tar.gz" | tar xzf - ``` 2. Copy the `pf-elastic-collector` and `pf-elastic-symbolizer` binaries to a directory in the machine’s `PATH`. 
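One way to carry out that copy step, assuming the binaries were extracted into the current directory and that `/usr/local/bin` is on the machine's `PATH`:

```sh
# Install both binaries with executable permissions (paths are assumptions).
sudo install -m 0755 pf-elastic-collector pf-elastic-symbolizer /usr/local/bin/
```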
diff --git a/solutions/observability/infra-and-hosts/view-infrastructure-metrics-by-resource-type.md b/solutions/observability/infra-and-hosts/view-infrastructure-metrics-by-resource-type.md index 57fb78789..129b1dbee 100644 --- a/solutions/observability/infra-and-hosts/view-infrastructure-metrics-by-resource-type.md +++ b/solutions/observability/infra-and-hosts/view-infrastructure-metrics-by-resource-type.md @@ -16,7 +16,7 @@ To open the **Infrastructure inventory** page in:

- **Serverless,** go to **Infrastructure inventory** in your Observability Serverless project.

:::{image} /solutions/images/observability-metrics-app.png
-:alt: Infrastructure UI in {kib}
+:alt: Infrastructure UI in {{kib}}
:screenshot:
:::

diff --git a/solutions/observability/synthetics/analyze-data.md b/solutions/observability/synthetics/analyze-data.md index 9cd3c2438..c1424d5ed 100644 --- a/solutions/observability/synthetics/analyze-data.md +++ b/solutions/observability/synthetics/analyze-data.md @@ -51,7 +51,7 @@ When you go to an individual monitor’s page, you’ll see much more detail abo

* The **![Pencil icon](/solutions/images/observability-pencil.svg "") Edit monitor** button that allows you to edit the monitor’s configuration.

:::{image} /solutions/images/observability-synthetics-analyze-individual-monitor-header.png
-:alt: Header at the top of the individual monitor page for all monitor types in the {synthetics-app}
+:alt: Header at the top of the individual monitor page for all monitor types in the {{synthetics-app}}
:screenshot:
:::

@@ -62,7 +62,7 @@ Each individual monitor’s page has three tabs: Overview, History, and Errors. 

The **Overview** tab has information about the monitor availability, duration, and any errors that have occurred since the monitor was created. The *Duration trends* chart displays the timing for each check that was performed in the last 30 days. This visualization helps you to gain insights into how quickly requests resolve by the targeted endpoint and gives you a sense of how frequently a host or endpoint was down.

:::{image} /solutions/images/observability-synthetics-analyze-individual-monitor-details.png
-:alt: Details in the Overview tab on the individual monitor page for all monitor types in the {synthetics-app}
+:alt: Details in the Overview tab on the individual monitor page for all monitor types in the {{synthetics-app}}
:screenshot:
:::

@@ -73,14 +73,14 @@ The **History** tab has information on every time the monitor has run. It includ

For browser monitors, you can click on any run in the **Test runs** list to see the details for that run. Read more about what information is included in the [Details for one run](/solutions/observability/synthetics/analyze-data.md#synthetics-analyze-one-run) section below.

:::{image} /solutions/images/observability-synthetics-analyze-individual-monitor-history.png
-:alt: The History tab on the individual monitor page for all monitor types in the {synthetics-app}
+:alt: The History tab on the individual monitor page for all monitor types in the {{synthetics-app}}
:screenshot:
:::

If the monitor is configured to [retest on failure](/solutions/observability/synthetics/configure-projects.md#synthetics-configuration-monitor), you’ll see retests listed in the **Test runs** table. Runs that are retests include a rerun icon (![Refresh icon](/solutions/images/observability-refresh.svg "")) next to the result badge.
:::{image} /solutions/images/observability-synthetics-retest.png
-:alt: A failed run and a retest in the table of test runs in the {synthetics-app}
+:alt: A failed run and a retest in the table of test runs in the {{synthetics-app}}
:screenshot:
:::

@@ -93,7 +93,7 @@ The Errors tab includes a high-level overview of all alerts and a complete list

For browser monitors, you can click on any run in the **Error** list to open an **Error details** page that includes most of the same information that is included in the [Details for one run](/solutions/observability/synthetics/analyze-data.md#synthetics-analyze-one-run) section below.

:::{image} /solutions/images/observability-synthetics-analyze-individual-monitor-errors.png
-:alt: The Errors tab on the individual monitor page for all monitor types in the {synthetics-app}
+:alt: The Errors tab on the individual monitor page for all monitor types in the {{synthetics-app}}
:screenshot:
:::

@@ -115,7 +115,7 @@ The journey page on the Overview tab includes:

* A list of the **last 10 test runs** that link to the [details for each run](/solutions/observability/synthetics/analyze-data.md#synthetics-analyze-one-run).

:::{image} /solutions/images/observability-synthetics-analyze-journeys-over-time.png
-:alt: Individual journey page for browser monitors in the {synthetics-app}
+:alt: Individual journey page for browser monitors in the {{synthetics-app}}
:screenshot:
:::

@@ -133,7 +133,7 @@ At the top of the page, see the *Code executed* and any *Console* output for eac

Navigate through each step using **![Previous icon](/solutions/images/observability-arrowLeft.svg "") Previous** and **Next ![Next icon](/solutions/images/observability-arrowRight.svg "")**.

:::{image} /solutions/images/observability-synthetics-analyze-one-run-code-executed.png
-:alt: Step carousel on a page detailing one run of a browser monitor in the {synthetics-app}
+:alt: Step carousel on a page detailing one run of a browser monitor in the {{synthetics-app}}
:screenshot:
:::

@@ -161,7 +161,7 @@ Customize screenshot behavior for all monitors in the [configuration file](/solu

Screenshots can be particularly helpful to identify what went wrong when a step fails because of a change to the UI. You can compare the failed step to the last time the step successfully completed.

:::{image} /solutions/images/observability-synthetics-analyze-one-step-screenshot.png
-:alt: Screenshot for one step in a browser monitor in the {synthetics-app}
+:alt: Screenshot for one step in a browser monitor in the {{synthetics-app}}
:screenshot:
:::

@@ -182,7 +182,7 @@ Next to each network timing metric, there’s an icon that indicates whether the

This gives you an overview of how much time is spent (and how that time is spent) loading resources. This high-level information may not help you diagnose a problem on its own, but it could act as a signal to look at more granular information in the [Network requests](/solutions/observability/synthetics/analyze-data.md#synthetics-analyze-one-step-network) section.
:::{image} /solutions/images/observability-synthetics-analyze-one-step-timing.png
-:alt: Network timing visualization for one step in a browser monitor in the {synthetics-app}
+:alt: Network timing visualization for one step in a browser monitor in the {{synthetics-app}}
:screenshot:
:::

@@ -204,7 +204,7 @@ Largest contentful paint and Cumulative layout shift are part of Google’s [Cor

Next to each metric, there’s an icon that indicates whether the value is higher (![Value is higher icon](/solutions/images/observability-sortUp.svg "")), lower (![Value is lower icon](/solutions/images/observability-sortDown.svg "")), or the same (![Value is the same](/solutions/images/observability-minus.svg "")) compared to all runs over the last 24 hours. Hover over the icon to see more details in a tooltip.

:::{image} /solutions/images/observability-synthetics-analyze-one-step-metrics.png
-:alt: Metrics visualization for one step in a browser monitor in the {synthetics-app}
+:alt: Metrics visualization for one step in a browser monitor in the {{synthetics-app}}
:screenshot:
:::

@@ -215,7 +215,7 @@ The **Object weight** visualization shows the cumulative size of downloaded reso

This provides a different kind of analysis. For example, you might have a large number of JavaScript files, each of which will need a separate download, but they may be collectively small. This could help you identify an opportunity to improve efficiency by combining multiple files into one.

:::{image} /solutions/images/observability-synthetics-analyze-one-step-object.png
-:alt: Object visualization for one step in a browser monitor in the {synthetics-app}
+:alt: Object visualization for one step in a browser monitor in the {{synthetics-app}}
:screenshot:
:::

@@ -228,7 +228,7 @@ The colored bars within each line indicate the time spent per resource. Each col

Understanding each phase of a request can help you improve your site’s speed by reducing the time spent in each phase.

:::{image} /solutions/images/observability-synthetics-analyze-one-step-network.png
-:alt: Network requests waterfall visualization for one step in a browser monitor in the {synthetics-app}
+:alt: Network requests waterfall visualization for one step in a browser monitor in the {{synthetics-app}}
:screenshot:
:::

diff --git a/solutions/observability/synthetics/configure-lightweight-monitors.md b/solutions/observability/synthetics/configure-lightweight-monitors.md index db2de243a..13ba51fcb 100644 --- a/solutions/observability/synthetics/configure-lightweight-monitors.md +++ b/solutions/observability/synthetics/configure-lightweight-monitors.md @@ -640,7 +640,6 @@ $$$monitor-tcp-hosts$$$

* **A hostname and port, such as `localhost:12345`.** Synthetics connects to the port on the specified host. If the monitor is [configured to use SSL](beats://reference/heartbeat/configuration-ssl.md), Synthetics establishes an SSL/TLS-based connection. Otherwise, it establishes a TCP connection.
* **A full URL using the syntax `scheme://host:[port]`**, where:
-
    * `scheme` is one of `tcp`, `plain`, `ssl` or `tls`. If `tcp` or `plain` is specified, Synthetics establishes a TCP connection even if the monitor is configured to use SSL. If `tls` or `ssl` is specified, Synthetics establishes an SSL connection. However, if the monitor is not configured to use SSL, the system defaults are used (currently not supported on Windows).
    * `host` is the hostname.
    * `port` is the port number.
diff --git a/solutions/observability/synthetics/configure-settings.md b/solutions/observability/synthetics/configure-settings.md index 9ef4c5f10..30beb1e2f 100644 --- a/solutions/observability/synthetics/configure-settings.md +++ b/solutions/observability/synthetics/configure-settings.md @@ -49,7 +49,7 @@ You can enable and disable default alerts for individual monitors in a few ways: In the **Alerting** tab on the Synthetics Settings page, you can add and configure connectors. If you are running in Elastic Cloud, then an SMTP connector will automatically be configured, allowing you to easily set up email alerts. Read more about all available connectors in [Action types](/solutions/observability/incident-management/create-an-apm-anomaly-rule.md). :::{image} /solutions/images/observability-synthetics-settings-alerting.png -:alt: Alerting tab on the Synthetics Settings page in {kib} +:alt: Alerting tab on the Synthetics Settings page in {{kib}} :screenshot: ::: @@ -60,7 +60,7 @@ In the **Alerting** tab on the Synthetics Settings page, you can add and configu In the **{{private-location}}s** tab, you can add and manage {{private-location}}s. After you [Set up {{fleet-server}} and {{agent}}](/solutions/observability/synthetics/monitor-resources-on-private-networks.md#synthetics-private-location-fleet-agent) and [Connect to the {{stack}} or your serverless Observability project](/solutions/observability/synthetics/monitor-resources-on-private-networks.md#synthetics-private-location-connect), this is where you will add the {{private-location}} so you can specify it as the location for a monitor created using the Synthetics UI or a Synthetics project. :::{image} /solutions/images/observability-synthetics-settings-private-locations.png -:alt: {{private-location}}s tab on the Synthetics Settings page in {kib} +:alt: {{private-location}}s tab on the Synthetics Settings page in {{kib}} :screenshot: ::: @@ -71,7 +71,7 @@ Global parameters can be defined once and used across the configuration of light In the **Global parameters** tab, you can define variables and parameters. This is one of several methods you can use to define variables and parameters. To learn more about the other methods and which methods take precedence over others, see [Work with params and secrets](/solutions/observability/synthetics/work-with-params-secrets.md). :::{image} /solutions/images/observability-synthetics-settings-global-parameters.png -:alt: Global parameters tab on the Synthetics Settings page in {kib} +:alt: Global parameters tab on the Synthetics Settings page in {{kib}} :screenshot: ::: @@ -82,7 +82,7 @@ When you set up a synthetic monitor, data from the monitor is saved in [Elastics In the **Data retention** tab, use the links to jump to the relevant policy for each data stream. Learn more about the data included in each data stream in [Manage data retention](/solutions/observability/synthetics/manage-data-retention.md). 
:::{image} /solutions/images/observability-synthetics-settings-data-retention.png
-:alt: Data retention tab on the Synthetics Settings page in {kib}
+:alt: Data retention tab on the Synthetics Settings page in {{kib}}
:screenshot:
:::

@@ -100,6 +100,6 @@ In a serverless project, to create a Project API key you must be logged in as a

::::

:::{image} /solutions/images/observability-synthetics-settings-api-keys.png
-:alt: Project API keys tab on the Synthetics Settings page in {kib}
+:alt: Project API keys tab on the Synthetics Settings page in {{kib}}
:screenshot:
::: \ No newline at end of file

diff --git a/solutions/observability/synthetics/create-monitors-ui.md b/solutions/observability/synthetics/create-monitors-ui.md index 3bfc8411e..651648ec7 100644 --- a/solutions/observability/synthetics/create-monitors-ui.md +++ b/solutions/observability/synthetics/create-monitors-ui.md @@ -48,7 +48,7 @@ To use the UI to add a lightweight monitor:

If you’ve [added a {{private-location}}](/solutions/observability/synthetics/monitor-resources-on-private-networks.md), you’ll see the {{private-location}} in the list of *Locations*.

```{image} /solutions/images/serverless-private-locations-monitor-locations.png
- :alt: Screenshot of Monitor locations options including a {private-location}
+ :alt: Screenshot of Monitor locations options including a {{private-location}}
 :screenshot:
```

diff --git a/solutions/observability/synthetics/manage-data-retention.md b/solutions/observability/synthetics/manage-data-retention.md index 2146baebe..506630ec2 100644 --- a/solutions/observability/synthetics/manage-data-retention.md +++ b/solutions/observability/synthetics/manage-data-retention.md @@ -17,14 +17,14 @@ There are six data streams recorded by synthetic monitors: `http`, `tcp`, `icmp`

There are six data streams recorded by synthetic monitors:

-| Data stream | Data includes | Default retention period | |
-| --- | --- | --- | --- |
-| `http` | The URL that was checked, the status of the check, and any errors that occurred | 1 year | |
-| `tcp` | The URL that was checked, the status of the check, and any errors that occurred | 1 year | |
-| `icmp` | The URL that was checked, the status of the check, and any errors that occurred | 1 year | |
-| `browser` | The URL that was checked, the status of the check, and any errors that occurred | 1 year | |
-| `browser.screenshot` | Binary image data used to construct a screenshot and metadata with information related to de-duplicating this data | 14 days | |
-| `browser.network` | Detailed metadata around requests for resources required by the pages being checked | 14 days | |
+| Data stream | Data includes | Default retention period |
+| --- | --- | --- |
+| `http` | The URL that was checked, the status of the check, and any errors that occurred | 1 year |
+| `tcp` | The URL that was checked, the status of the check, and any errors that occurred | 1 year |
+| `icmp` | The URL that was checked, the status of the check, and any errors that occurred | 1 year |
+| `browser` | The URL that was checked, the status of the check, and any errors that occurred | 1 year |
+| `browser.screenshot` | Binary image data used to construct a screenshot and metadata with information related to de-duplicating this data | 14 days |
+| `browser.network` | Detailed metadata around requests for resources required by the pages being checked | 14 days |

All types of checks record core metadata. Browser-based checks store two additional types of data: network and screenshot documents.
These browser-specific indices are usually many times larger than the core metadata. The relative sizes of each vary depending on the sites being checked, with network data usually being the larger of the two by a significant factor.

diff --git a/solutions/observability/uptime/get-started.md b/solutions/observability/uptime/get-started.md index 55e422d75..b4646fd44 100644 --- a/solutions/observability/uptime/get-started.md +++ b/solutions/observability/uptime/get-started.md @@ -38,7 +38,9 @@ If you’ve used the Elastic Synthetics integration to create monitors in the pa

Elastic provides Docker images that you can use to run monitors. Start by pulling the {{heartbeat}} Docker image.

-Version 9.0.0-beta1 has not yet been released.
+```sh subs=true
+docker pull docker.elastic.co/beats/heartbeat:{{version}}
+```

## Configure [uptime-set-up-config]

@@ -72,7 +74,55 @@ If you previously used {{heartbeat}} to set up **`browser`** monitor, you can fi

After configuring the monitor, run it in Docker and connect the monitor to the {{stack}}.

-Version 9.0.0-beta1 has not yet been released.
+You'll need to retrieve your {{es}} credentials for either an [{{ecloud}} ID](beats://reference/heartbeat/configure-cloud-id.md) or another [{{es}} Cluster](beats://reference/heartbeat/elasticsearch-output.md).
+
+The example below shows how to run synthetics tests, indexing data into {{es}}.
+You'll need to insert your actual `cloud.id` and `cloud.auth` values to successfully index data to your cluster.
+
+% We do NOT use <1> references in the below example, because they create whitespace after the trailing \
+% when copied into a shell, which creates mysterious errors when copy and pasting!
+
+```sh subs=true
+docker run \
+  --rm \
+  --name=heartbeat \
+  --user=heartbeat \
+  --volume="$PWD/heartbeat.yml:/usr/share/heartbeat/heartbeat.yml:ro" \
+  --cap-add=NET_RAW \
+  docker.elastic.co/beats/heartbeat:{{version}} heartbeat -e \
+  -E cloud.id={cloud-id} \
+  -E cloud.auth=elastic:{cloud-pass}
+```
+
+If you aren't using {{ecloud}}, replace `-E cloud.id` and `-E cloud.auth` with your {{es}} hosts,
+username, and password:
+
+```sh subs=true
+docker run \
+  --rm \
+  --name=heartbeat \
+  --user=heartbeat \
+  --volume="$PWD/heartbeat.yml:/usr/share/heartbeat/heartbeat.yml:ro" \
+  --cap-add=NET_RAW \
+  docker.elastic.co/beats/heartbeat:{{version}} heartbeat -e \
+  -E output.elasticsearch.hosts=["localhost:9200"] \
+  -E output.elasticsearch.username=elastic \
+  -E output.elasticsearch.password=changeme
+```
+
+Note the `--volume` option, which mounts local directories into the
+container. Here, we mount the `heartbeat.yml` from the working directory
+into {{heartbeat}}'s expected location for `heartbeat.yml`.
+
+:::{warning}
+Elastic Synthetics runs Chromium without the extra protection of its process
+[sandbox](https://chromium.googlesource.com/chromium/src/+/master/docs/linux/sandboxing.md)
+for greater compatibility with Linux server distributions.
+Add the `sandbox: true` option to a given browser monitor in {{heartbeat}} to enable sandboxing.
+This may require using a custom seccomp policy with Docker, which brings its own added risks.
+Running without the sandbox is generally safe when run against sites whose content you trust,
+and with a recent version of Elastic Synthetics and Chromium.
+::: ## View in {{kib}} [uptime-set-up-kibana] diff --git a/solutions/search/esql-for-search.md b/solutions/search/esql-for-search.md index 44a7ff7eb..b7a995079 100644 --- a/solutions/search/esql-for-search.md +++ b/solutions/search/esql-for-search.md @@ -144,7 +144,7 @@ FROM articles METADATA _score ```esql FROM books METADATA _score -| WHERE match(semantic_title, "fantasy adventure", { "boost": 0.75 }) +| WHERE match(semantic_title, "fantasy adventure", { "boost": 0.75 }) OR match(title, "fantasy adventure", { "boost": 0.25 }) | SORT _score DESC ``` @@ -157,7 +157,7 @@ Refer to [{{esql}} limitations](elasticsearch://reference/query-languages/esql/l ### Tutorials and how-to guides [esql-for-search-tutorials] -- [Search and filter with {{esql}}](esql-search-tutorial.md): Hands-on tutorial for getting started with search tools in {esql} +- [Search and filter with {{esql}}](esql-search-tutorial.md): Hands-on tutorial for getting started with search tools in {{esql}} - [Semantic search with semantic_text](semantic-search/semantic-search-semantic-text.md): Learn how to use the `semantic_text` field type ### Technical reference [esql-for-search-reference] @@ -173,5 +173,5 @@ Refer to [{{esql}} limitations](elasticsearch://reference/query-languages/esql/l ### Related blog posts [esql-for-search-blogs] -https://www.elastic.co/search-labs/blog/esql-introducing-scoring-semantic-search[{{esql}}, you know for Search]: Introducing scoring and semantic search +- [{{esql}}, you know for Search](https://www.elastic.co/search-labs/blog/esql-introducing-scoring-semantic-search): Introducing scoring and semantic search - [Introducing full text filtering in {{esql}}](https://www.elastic.co/search-labs/blog/filtering-in-esql-full-text-search-match-qstr): Overview of {{esql}}'s text filtering capabilities diff --git a/solutions/search/hybrid-semantic-text.md b/solutions/search/hybrid-semantic-text.md index caa607455..384f71e88 100644 --- a/solutions/search/hybrid-semantic-text.md +++ b/solutions/search/hybrid-semantic-text.md @@ -55,7 +55,7 @@ In this step, you load the data that you later use to create embeddings from. Use the `msmarco-passagetest2019-top1000` data set, which is a subset of the MS MARCO Passage Ranking data set. It consists of 200 queries, each accompanied by a list of relevant text passages. All unique passages, along with their IDs, have been extracted from that data set and compiled into a [tsv file](https://github.com/elastic/stack-docs/blob/main/docs/en/stack/ml/nlp/data/msmarco-passagetest2019-unique.tsv). -Download the file and upload it to your cluster using the [Data Visualizer](../../manage-data/ingest/upload-data-files.md) in the {{ml-app}} UI. After your data is analyzed, click **Override settings**. Under **Edit field names***, assign `id` to the first column and `content` to the second. Click ***Apply***, then ***Import**. Name the index `test-data`, and click **Import**. After the upload is complete, you will see an index named `test-data` with 182,469 documents. +Download the file and upload it to your cluster using the [Data Visualizer](../../manage-data/ingest/upload-data-files.md) in the {{ml-app}} UI. After your data is analyzed, click **Override settings**. Under **Edit field names**, assign `id` to the first column and `content` to the second. Click **Apply**, then **Import**. Name the index `test-data`, and click **Import**. After the upload is complete, you will see an index named `test-data` with 182,469 documents. 
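If you would rather sanity-check the upload from a terminal than in the UI, a document count request is one option. This is a sketch only: the host, port, and credentials are placeholders for your own cluster.

```sh
# Expect "count" : 182469 if the upload completed successfully.
curl -s -u elastic:YOUR_PASSWORD "https://localhost:9200/test-data/_count?pretty"
```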
## Reindex the data for hybrid search [hybrid-search-reindex-data]

diff --git a/solutions/search/rag/playground.md b/solutions/search/rag/playground.md index bec09040f..53c1e8578 100644 --- a/solutions/search/rag/playground.md +++ b/solutions/search/rag/playground.md @@ -141,7 +141,7 @@ There are many options for ingesting data into {{es}}, including:

* [Elastic connectors](elasticsearch://reference/search-connectors/index.md) for data synced from third-party sources
* The {{es}} [Bulk API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-bulk) for JSON documents

-  ::::{dropdown} **Expand** for example
+  ::::{dropdown} Expand for example

  To add a few documents to an index called `books`, run the following in Dev Tools Console:

  ```console

diff --git a/solutions/search/semantic-search/semantic-search-elser-ingest-pipelines.md b/solutions/search/semantic-search/semantic-search-elser-ingest-pipelines.md index d95b9a3a1..aea3177b1 100644 --- a/solutions/search/semantic-search/semantic-search-elser-ingest-pipelines.md +++ b/solutions/search/semantic-search/semantic-search-elser-ingest-pipelines.md @@ -102,7 +102,7 @@ The `msmarco-passagetest2019-top1000` dataset was not utilized to train the mode

::::

-Download the file and upload it to your cluster using the [File Uploader](../../../manage-data/ingest/upload-data-files.md) in the UI. After your data is analyzed, click **Override settings**. Under **Edit field names***, assign `id` to the first column and `content` to the second. Click ***Apply***, then ***Import**. Name the index `test-data`, and click **Import**. After the upload is complete, you will see an index named `test-data` with 182,469 documents.
+Download the file and upload it to your cluster using the [File Uploader](../../../manage-data/ingest/upload-data-files.md) in the UI. After your data is analyzed, click **Override settings**. Under **Edit field names**, assign `id` to the first column and `content` to the second. Click **Apply**, then **Import**. Name the index `test-data`, and click **Import**. After the upload is complete, you will see an index named `test-data` with 182,469 documents.

### Ingest the data through the {{infer}} ingest pipeline [reindexing-data-elser]

diff --git a/solutions/search/semantic-search/semantic-search-inference.md b/solutions/search/semantic-search/semantic-search-inference.md index 5e435893f..63c4f3977 100644 --- a/solutions/search/semantic-search/semantic-search-inference.md +++ b/solutions/search/semantic-search/semantic-search-inference.md @@ -830,7 +830,7 @@ In this step, you load the data that you later use in the {{infer}} ingest pipel

Use the `msmarco-passagetest2019-top1000` data set, which is a subset of the MS MARCO Passage Ranking data set. It consists of 200 queries, each accompanied by a list of relevant text passages. All unique passages, along with their IDs, have been extracted from that data set and compiled into a [tsv file](https://github.com/elastic/stack-docs/blob/main/docs/en/stack/ml/nlp/data/msmarco-passagetest2019-unique.tsv).

-Download the file and upload it to your cluster using the [Data Visualizer](../../../manage-data/ingest/upload-data-files.md) in the {{ml-app}} UI. After your data is analyzed, click **Override settings**. Under **Edit field names***, assign `id` to the first column and `content` to the second. Click ***Apply***, then ***Import**. Name the index `test-data`, and click **Import**. After the upload is complete, you will see an index named `test-data` with 182,469 documents.
+Download the file and upload it to your cluster using the [Data Visualizer](../../../manage-data/ingest/upload-data-files.md) in the {{ml-app}} UI. After your data is analyzed, click **Override settings**. Under **Edit field names**, assign `id` to the first column and `content` to the second. Click **Apply**, then **Import**. Name the index `test-data`, and click **Import**. After the upload is complete, you will see an index named `test-data` with 182,469 documents. ## Ingest the data through the {{infer}} ingest pipeline [reindexing-data-infer] diff --git a/solutions/search/semantic-search/semantic-search-semantic-text.md b/solutions/search/semantic-search/semantic-search-semantic-text.md index 2c8e35cdf..17027c4eb 100644 --- a/solutions/search/semantic-search/semantic-search-semantic-text.md +++ b/solutions/search/semantic-search/semantic-search-semantic-text.md @@ -54,7 +54,7 @@ In this step, you load the data that you later use to create embeddings from it. Use the `msmarco-passagetest2019-top1000` data set, which is a subset of the MS MARCO Passage Ranking data set. It consists of 200 queries, each accompanied by a list of relevant text passages. All unique passages, along with their IDs, have been extracted from that data set and compiled into a [tsv file](https://github.com/elastic/stack-docs/blob/main/docs/en/stack/ml/nlp/data/msmarco-passagetest2019-unique.tsv). -Download the file and upload it to your cluster using the [Data Visualizer](../../../manage-data/ingest/upload-data-files.md) in the {{ml-app}} UI. After your data is analyzed, click **Override settings**. Under **Edit field names***, assign `id` to the first column and `content` to the second. Click ***Apply***, then ***Import**. Name the index `test-data`, and click **Import**. After the upload is complete, you will see an index named `test-data` with 182,469 documents. +Download the file and upload it to your cluster using the [Data Visualizer](../../../manage-data/ingest/upload-data-files.md) in the {{ml-app}} UI. After your data is analyzed, click **Override settings**. Under **Edit field names**, assign `id` to the first column and `content` to the second. Click **Apply**, then **Import**. Name the index `test-data`, and click **Import**. After the upload is complete, you will see an index named `test-data` with 182,469 documents. ## Reindex the data [semantic-text-reindex-data] diff --git a/solutions/security/configure-elastic-defend/configure-offline-endpoints-air-gapped-environments.md b/solutions/security/configure-elastic-defend/configure-offline-endpoints-air-gapped-environments.md index ea1dfbd4a..7d65b2d14 100644 --- a/solutions/security/configure-elastic-defend/configure-offline-endpoints-air-gapped-environments.md +++ b/solutions/security/configure-elastic-defend/configure-offline-endpoints-air-gapped-environments.md @@ -183,8 +183,8 @@ Download the most recent artifact files from the Elastic global artifact server, Below is an example script that downloads all the global artifact updates. There are different artifact files for each version of {{elastic-endpoint}}. Change the value of the `ENDPOINT_VERSION` variable in the example script to match the deployed version of {{elastic-endpoint}}. 
-```sh -export ENDPOINT_VERSION=9.0.0-beta1 && wget -P downloads/endpoint/manifest https://artifacts.security.elastic.co/downloads/endpoint/manifest/artifacts-$ENDPOINT_VERSION.zip && zcat -q downloads/endpoint/manifest/artifacts-$ENDPOINT_VERSION.zip | jq -r '.artifacts | to_entries[] | .value.relative_url' | xargs -I@ curl "https://artifacts.security.elastic.co@" --create-dirs -o ".@" +```sh subs=true +export ENDPOINT_VERSION={{version}} && wget -P downloads/endpoint/manifest https://artifacts.security.elastic.co/downloads/endpoint/manifest/artifacts-$ENDPOINT_VERSION.zip && zcat -q downloads/endpoint/manifest/artifacts-$ENDPOINT_VERSION.zip | jq -r '.artifacts | to_entries[] | .value.relative_url' | xargs -I@ curl "https://artifacts.security.elastic.co@" --create-dirs -o ".@" ``` This command will download files and directory structure that should be directly copied to the file server. @@ -198,8 +198,8 @@ Each new global artifact update release increments a version identifier that you To confirm the latest version of the artifacts for a given {{elastic-endpoint}} version, check the published version. This example script checks the version: -```sh -curl -s https://artifacts.security.elastic.co/downloads/endpoint/manifest/artifacts-9.0.0-beta1.zip | zcat -q | jq -r .manifest_version +```sh subs=true +curl -s https://artifacts.security.elastic.co/downloads/endpoint/manifest/artifacts-{{version}}.zip | zcat -q | jq -r .manifest_version ``` Replace `https://artifacts.security.elastic.co` in the command above with your local mirror server to validate that the artifacts are served correctly. diff --git a/solutions/security/configure-elastic-defend/install-elastic-defend.md b/solutions/security/configure-elastic-defend/install-elastic-defend.md index 38c078edf..22cd88048 100644 --- a/solutions/security/configure-elastic-defend/install-elastic-defend.md +++ b/solutions/security/configure-elastic-defend/install-elastic-defend.md @@ -59,7 +59,7 @@ If you’re using macOS, some versions may require you to grant Full Disk Access | | | | --- | --- | - | **Traditional Endpoint presets** | All traditional endpoint presets *except **Data Collection*** have these preventions enabled by default: malware, ransomware, memory threat, malicious behavior, and credential theft. Each preset collects the following events:

- **Data Collection:** All events; no preventions
- **Next-Generation Antivirus (NGAV):** Process events; all preventions
- **Essential EDR (Endpoint Detection & Response):** Process, Network, File events; all preventions
- **Complete EDR (Endpoint Detection & Response):** All events; all preventions
| + | **Traditional Endpoint presets** | All traditional endpoint presets except **Data Collection** have these preventions enabled by default: malware, ransomware, memory threat, malicious behavior, and credential theft. Each preset collects the following events:<br>

- **Data Collection:** All events; no preventions
- **Next-Generation Antivirus (NGAV):** Process events; all preventions
- **Essential EDR (Endpoint Detection & Response):** Process, Network, File events; all preventions
- **Complete EDR (Endpoint Detection & Response):** All events; all preventions
| | **Cloud Workloads presets** | Both cloud workload presets are intended for monitoring cloud-based Linux hosts. Therefore, [session data](/solutions/security/investigate/session-view.md) collection, which enriches process events, is enabled by default. They both have all preventions disabled by default, and collect process, network, and file events.

- **All events:** Includes data from automated sessions.
- **Interactive only:** Filters out data from non-interactive sessions by creating an [event filter](/solutions/security/manage-elastic-defend/event-filters.md).
|
 6. Enter a name for the agent policy in **New agent policy name**. If other agent policies already exist, you can click the **Existing hosts** tab and select an existing policy instead. For more details on {{agent}} configuration settings, refer to [{{agent}} policies](/reference/fleet/agent-policy.md).
diff --git a/solutions/security/detect-and-alert.md b/solutions/security/detect-and-alert.md
index e67c33d0f..4a0066115 100644
--- a/solutions/security/detect-and-alert.md
+++ b/solutions/security/detect-and-alert.md
@@ -59,7 +59,7 @@ Cold [data tiers](/manage-data/lifecycle/data-tiers.md) store time series data t
 Data tiers are a powerful and useful tool. When using them, keep the following in mind:

-* To avoid rule failures, do not modify {{ilm}} policies for {elastic-sec}-controlled indices, such as alert and list indices.
+* To avoid rule failures, do not modify {{ilm}} policies for {{elastic-sec}}-controlled indices, such as alert and list indices.
 * Source data must have an {{ilm}} policy that keeps it in the hot or warm tiers for at least 24 hours before moving to cold or frozen tiers.

 ## Limited support for indicator match rules [support-indicator-rules]
diff --git a/solutions/security/detect-and-alert/create-manage-shared-exception-lists.md b/solutions/security/detect-and-alert/create-manage-shared-exception-lists.md
index 89ace44be..33b1c5c5b 100644
--- a/solutions/security/detect-and-alert/create-manage-shared-exception-lists.md
+++ b/solutions/security/detect-and-alert/create-manage-shared-exception-lists.md
@@ -105,7 +105,7 @@ Apply shared exception lists to rules:
 4. Click **Save**.
 5. (Optional) To verify that the shared exception list was added to the rules you selected:

-    1. Open a rule’s details page (**Rules → Detection rules (SIEM) → *Rule name***).
+    1. Open a rule’s details page (**Rules → Detection rules (SIEM) → _Rule name_**).
     2. Scroll down the page, and then select the **Rule exceptions** tab.
     3. Navigate to the exception items that are included in the shared exception list. Click the **Affects shared list** link to view the associated shared exception lists.
diff --git a/solutions/security/detect-and-alert/manage-detection-alerts.md b/solutions/security/detect-and-alert/manage-detection-alerts.md
index ae8564314..8fd13a8d8 100644
--- a/solutions/security/detect-and-alert/manage-detection-alerts.md
+++ b/solutions/security/detect-and-alert/manage-detection-alerts.md
@@ -35,7 +35,7 @@ The Alerts page offers various ways for you to organize and triage detection ale
 * Use the date and time filter to define a specific time range. By default, this filter is set to search the last 24 hours.
 * Use the drop-down filter controls to filter alerts by up to four fields. By default, you can filter alerts by **Status**, **Severity**, **User**, and **Host**, and you can [edit the controls](/solutions/security/detect-and-alert/manage-detection-alerts.md#drop-down-filter-controls) to use other fields.
 * Visualize and group alerts by specific fields in the visualization section. Use the buttons on the left to select a view type (**Summary**, **Trend**, **Counts**, or **Treemap**), and use the menus on the right to select the ECS fields used for grouping alerts. Refer to [Visualize detection alerts](/solutions/security/detect-and-alert/visualize-detection-alerts.md) for more on each view type.
-* Hover over a value to display available [inline actions](/solutions/security/get-started/elastic-security-ui.md#inline-actions).
Click the expand icon for more options, including **Show top *x*** and **Copy to Clipboard**. The available options vary based on the type of data. +* Hover over a value to display available [inline actions](/solutions/security/get-started/elastic-security-ui.md#inline-actions). Click the expand icon for more options, including **Show top _x_** and **Copy to Clipboard**. The available options vary based on the type of data. :::{image} /solutions/images/security-inline-actions-menu.png :alt: Inline additional actions menu @@ -122,7 +122,7 @@ To interact with grouped alerts: Use the toolbar buttons in the upper-left of the Alerts table to customize the columns you want displayed: * **Columns**: Reorder the columns. -* **Sort fields *x***: Sort the table by one or more columns. +* **Sort fields _x_**: Sort the table by one or more columns. * **Fields**: Select the fields to display in the table. You can also add [runtime fields](/solutions/security/get-started/create-runtime-fields-in-elastic-security.md) to detection alerts and display them in the Alerts table. Click the **Full screen** button in the upper-right to view the table in full-screen mode. diff --git a/solutions/security/detect-and-alert/manage-detection-rules.md b/solutions/security/detect-and-alert/manage-detection-rules.md index 9bb2e1717..f9ee882b6 100644 --- a/solutions/security/detect-and-alert/manage-detection-rules.md +++ b/solutions/security/detect-and-alert/manage-detection-rules.md @@ -105,7 +105,7 @@ For {{ml}} rules, an indicator icon (![Error icon from rules table](/solutions/i To [snooze](/solutions/security/detect-and-alert/manage-detection-rules.md#snooze-rule-actions) rule actions, go to the **Actions** tab and click the bell icon. :::: -4. If available, select **Overwrite all selected *x*** to overwrite the settings on the rules. For example, if you’re adding tags to multiple rules, selecting **Overwrite all selected rules tags** removes all the rules' original tags and replaces them with the tags you specify. +4. If available, select **Overwrite all selected _x_** to overwrite the settings on the rules. For example, if you’re adding tags to multiple rules, selecting **Overwrite all selected rules tags** removes all the rules' original tags and replaces them with the tags you specify. 5. Click **Save**. diff --git a/solutions/security/get-started/elastic-security-ui.md b/solutions/security/get-started/elastic-security-ui.md index 8afb5b34e..3c3f60af2 100644 --- a/solutions/security/get-started/elastic-security-ui.md +++ b/solutions/security/get-started/elastic-security-ui.md @@ -79,7 +79,7 @@ Inline actions include the following (some actions are unavailable in some conte * **Filter Out**: Add a filter that excludes the selected value. * **Add to timeline**: Add a filter to Timeline for the selected value. * **Toggle column in table**: Add or remove the selected field as a column in the alerts or events table. (This action is only available on an alert’s or event’s details flyout.) -* **Show top *x***: Display a pop-up window that shows the selected field’s top events or detection alerts. +* **Show top _x_**: Display a pop-up window that shows the selected field’s top events or detection alerts. * **Copy to Clipboard**: Copy the selected field-value pair to paste elsewhere. 
diff --git a/solutions/security/investigate/visual-event-analyzer.md b/solutions/security/investigate/visual-event-analyzer.md index fbde3b71a..2225a7bc1 100644 --- a/solutions/security/investigate/visual-event-analyzer.md +++ b/solutions/security/investigate/visual-event-analyzer.md @@ -171,7 +171,7 @@ When you select an `event.category` pill, all the events within that category ar :::: -To examine alerts associated with the event, select the alert pill (***x* alert**). The left pane lists the total number of associated alerts, and alerts are ordered from oldest to newest. Each alert shows the type of event that produced it (`event.category`), the event timestamp (`@timestamp`), and rule that generated the alert (`kibana.alert.rule.name`). Click on the rule name to open the alert’s details. +To examine alerts associated with the event, select the alert pill (**_x_ alert**). The left pane lists the total number of associated alerts, and alerts are ordered from oldest to newest. Each alert shows the type of event that produced it (`event.category`), the event timestamp (`@timestamp`), and rule that generated the alert (`kibana.alert.rule.name`). Click on the rule name to open the alert’s details. In the example screenshot below, five alerts were generated by the analyzed event (`lsass.exe`). The left pane displays the associated alerts and basic information about each one. diff --git a/troubleshoot/elasticsearch/circuit-breaker-errors.md b/troubleshoot/elasticsearch/circuit-breaker-errors.md index dbe523096..bf5a14e97 100644 --- a/troubleshoot/elasticsearch/circuit-breaker-errors.md +++ b/troubleshoot/elasticsearch/circuit-breaker-errors.md @@ -1,11 +1,11 @@ --- applies_to: - stack: + stack: deployment: - eck: - ess: - ece: - self: + eck: + ess: + ece: + self: mapped_pages: - https://www.elastic.co/guide/en/elasticsearch/reference/current/circuit-breaker-errors.html --- @@ -51,7 +51,7 @@ Caused by: org.elasticsearch.common.breaker.CircuitBreakingException: [parent] D **Check JVM memory usage** -If you’ve enabled Stack Monitoring, you can view JVM memory usage in {{kib}}. In the main menu, click **Stack Monitoring**. On the Stack Monitoring **Overview*** page, click ***Nodes**. The **JVM Heap** column lists the current memory usage for each node. +If you’ve enabled Stack Monitoring, you can view JVM memory usage in {{kib}}. In the main menu, click **Stack Monitoring**. On the Stack Monitoring **Overview** page, click **Nodes**. The **JVM Heap** column lists the current memory usage for each node. You can also use the [cat nodes API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cat-nodes) to get the current `heap.percent` for each node. diff --git a/troubleshoot/elasticsearch/diagnostic.md b/troubleshoot/elasticsearch/diagnostic.md index bb2006ada..fdf0ef083 100644 --- a/troubleshoot/elasticsearch/diagnostic.md +++ b/troubleshoot/elasticsearch/diagnostic.md @@ -1,11 +1,11 @@ --- applies_to: - stack: + stack: deployment: - eck: - ess: - ece: - self: + eck: + ess: + ece: + self: navigation_title: Diagnostics mapped_pages: - https://www.elastic.co/guide/en/elasticsearch/reference/current/diagnostic.html @@ -37,7 +37,7 @@ If you're using {{ech}}, you can use AutoOps to monitor your cluster. AutoOps si The Support Diagnostic tool is included as a sub-library in some Elastic deployments: -* {{ece}}: Located under **{{ece}}** > **Deployment*** > ***Operations*** > ***Prepare Bundle** > **{{es}}**. 
+* {{ece}}: Located under **{{ece}}** > **Deployment** > **Operations** > **Prepare Bundle** > **{{es}}**.
 * {{eck}}: Run as [`eck-diagnostics`](/troubleshoot/deployments/cloud-on-k8s/run-eck-diagnostics.md).

 You can also directly download the `diagnostics-X.X.X-dist.zip` file for the latest Support Diagnostic release from [the `support-diagnostic` repo](https://github.com/elastic/support-diagnostics/releases/latest).
diff --git a/troubleshoot/ingest/beats-loggingplugin/elastic-logging-plugin-for-docker.md b/troubleshoot/ingest/beats-loggingplugin/elastic-logging-plugin-for-docker.md
index 875a45070..6a043bbcf 100644
--- a/troubleshoot/ingest/beats-loggingplugin/elastic-logging-plugin-for-docker.md
+++ b/troubleshoot/ingest/beats-loggingplugin/elastic-logging-plugin-for-docker.md
@@ -12,22 +12,22 @@ You can set the debug level to capture debugging output about the Elastic Loggin
 1. Disable the plugin:

-    ```sh
-    docker plugin disable elastic/elastic-logging-plugin:9.0.0-beta1
+    ```sh subs=true
+    docker plugin disable elastic/elastic-logging-plugin:{{version}}
     ```

 2. Set the debug level:

-    ```sh
-    docker plugin set elastic/elastic-logging-plugin:9.0.0-beta1 LOG_DRIVER_LEVEL=debug
+    ```sh subs=true
+    docker plugin set elastic/elastic-logging-plugin:{{version}} LOG_DRIVER_LEVEL=debug
     ```

     Where valid settings for `LOG_DRIVER_LEVEL` are `debug`, `info`, `warning`, or `error`.

 3. Enable the plugin:

-    ```sh
-    docker plugin enable elastic/elastic-logging-plugin:9.0.0-beta1
+    ```sh subs=true
+    docker plugin enable elastic/elastic-logging-plugin:{{version}}
     ```
diff --git a/troubleshoot/ingest/elastic-serverless-forwarder.md b/troubleshoot/ingest/elastic-serverless-forwarder.md
index 901d2a7bf..b87a14163 100644
--- a/troubleshoot/ingest/elastic-serverless-forwarder.md
+++ b/troubleshoot/ingest/elastic-serverless-forwarder.md
@@ -37,7 +37,7 @@ To help with debugging, you can increase the amount of logging detail by adding
 1. Select the serverless forwarder **application** from **Lambda > Functions**
 2. Click **Configuration** and select **Environment Variables** and choose **Edit**
-3. Click **Add environment variable** and enter `LOG_LEVEL` as **Key*** and `DEBUG` as ***Value** and click **Save**
+3. Click **Add environment variable**, enter `LOG_LEVEL` as **Key** and `DEBUG` as **Value**, then click **Save**

 ## Using the Event ID format (version 1.6.0 and above) [aws-serverless-troubleshooting-event-id-format]
diff --git a/troubleshoot/ingest/logstash/kafka.md b/troubleshoot/ingest/logstash/kafka.md
index 873b027cd..5f95c6b37 100644
--- a/troubleshoot/ingest/logstash/kafka.md
+++ b/troubleshoot/ingest/logstash/kafka.md
@@ -92,9 +92,6 @@ For Kafka Broker versions 0.10.2.1 to 1.0.x: The problem is caused by a bug in K
 For older versions of Kafka or if the above does not fully resolve the issue: The problem can also be caused by setting the value for `poll_timeout_ms` too low relative to the rate at which the Kafka Brokers receive events themselves (or if Brokers periodically idle between receiving bursts of events). Increasing the value set for `poll_timeout_ms` proportionally decreases the number of offset commits in this scenario. For example, raising it by 10x will lead to 10x fewer offset commits.

-
-#
-
 **Symptoms**

 Logstash Kafka input randomly logs errors from the configured codec and/or reads events incorrectly (partial reads, mixing data between multiple events etc.).
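To narrow down whether the records themselves are malformed, it can help to read a few of them with the standard Kafka console consumer, bypassing Logstash and its codec entirely. A minimal sketch, assuming a broker reachable at `localhost:9092`; the topic name is hypothetical:

```sh
# A sketch: consume five raw records directly from the topic with the
# stock Kafka CLI to check whether they are well-formed for the codec
# configured on the Logstash input. Substitute your broker and topic.
kafka-console-consumer.sh --bootstrap-server localhost:9092 \
  --topic my-topic --from-beginning --max-messages 5
```

If the records look correct here but Logstash still misreads them, the codec configuration on the input is the more likely culprit.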
diff --git a/troubleshoot/kibana/alerts.md b/troubleshoot/kibana/alerts.md index 72b9b65e8..f6420666f 100644 --- a/troubleshoot/kibana/alerts.md +++ b/troubleshoot/kibana/alerts.md @@ -203,7 +203,7 @@ This approach should be used only temporarily as a last resort to restore functi ## Limitations [alerting-limitations] -The following limitations and known problems apply to the 9.0.0-beta1 release of the {{kib}} {{alert-features}}: +The following limitations and known problems apply to the {{version}} release of the {{kib}} {{alert-features}}: ### Alert visibility [_alert_visibility] diff --git a/troubleshoot/kibana/capturing-diagnostics.md b/troubleshoot/kibana/capturing-diagnostics.md index e5fda597a..da4f8d513 100644 --- a/troubleshoot/kibana/capturing-diagnostics.md +++ b/troubleshoot/kibana/capturing-diagnostics.md @@ -31,7 +31,7 @@ You can generate diagnostic information using this tool before you contact [Elas The Support Diagnostic tool is included out-of-the-box as a sub-library in: -* {{ece}} - Find the tool under **{{ece}}*** > ***Deployment*** > ***Operations*** > ***Prepare Bundle*** > ***{{kib}}**. +* {{ece}} - Find the tool under **{{ece}}** > **Deployment** > **Operations** > **Prepare Bundle** > **{{kib}}**. * {{eck}} - Run the tool with [`eck-diagnostics`](/troubleshoot/deployments/cloud-on-k8s/run-eck-diagnostics.md). You can also get the latest version of the tool by downloading the `diagnostics-X.X.X-dist.zip` file from [the `support-diagnostic` repo](https://github.com/elastic/support-diagnostics/releases/latest). diff --git a/troubleshoot/kibana/migration-failures.md b/troubleshoot/kibana/migration-failures.md index dcc4f4ce0..1bf1ced45 100644 --- a/troubleshoot/kibana/migration-failures.md +++ b/troubleshoot/kibana/migration-failures.md @@ -93,8 +93,9 @@ You can configure {{kib}} to automatically discard all corrupt objects and trans migrations.discardCorruptObjects: "8.4.0" ``` -**WARNING:** Enabling the flag above will cause the system to discard all corrupt objects, as well as those causing transform errors. Thus, it is HIGHLY recommended that you carefully review the list of conflicting objects in the logs. - +:::{warning} +Enabling the flag above will cause the system to discard all corrupt objects, as well as those causing transform errors. Thus, it is HIGHLY recommended that you carefully review the list of conflicting objects in the logs. +::: ## Documents for unknown saved objects [unknown-saved-object-types] diff --git a/troubleshoot/observability/apm-agent-dotnet/apm-net-agent.md b/troubleshoot/observability/apm-agent-dotnet/apm-net-agent.md index 1d74c723e..74e8e3564 100644 --- a/troubleshoot/observability/apm-agent-dotnet/apm-net-agent.md +++ b/troubleshoot/observability/apm-agent-dotnet/apm-net-agent.md @@ -147,7 +147,7 @@ namespace MyApp } ``` -During initialization, the agent checks if an additional logger was configured-- the agent only does this once, so it’s important to set it as early in the process as possible, typically in the `Application_Start` method. +During initialization, the agent checks if an additional logger was configured—the agent only does this once, so it’s important to set it as early in the process as possible, typically in the `Application_Start` method. ### General .NET applications [collect-logs-general]