Update ES exporter docs #11017
base: main
Changes from 4 commits: b2ae5cf, df1a1cf, 5465047, 63fa31b, 8d60fc4
````diff
@@ -131,32 +131,27 @@ The `elasticsearch.index` attribute is removed from the final document if it exi
 ## Performance and batching
 
-### Using sending queue
-
-The {{es}} exporter supports the `sending_queue` setting, which supports both queueing and batching. The sending queue is deactivated by default.
-
-You can turn on the sending queue by setting `sending_queue::enabled` to `true`:
-
-```yaml subs=true
-exporters:
-  elasticsearch:
-    endpoint: https://elasticsearch:9200
-    sending_queue:
-      enabled: true
+### Queuing and batching
+
+The {{es}} exporter supports the common `sending_queue` settings, which enable both queuing and batching. The default sending queue is configured to do async batching with the following configuration:
+
+```yaml
+sending_queue:
+  enabled: true
+  num_consumers: runtime.NumCPU()
+  queue_size: <based on queue.mem.events>
+  block_on_overflow: true
+  wait_for_result: true
+  batch:
+    flush_timeout: 10s
+    min_size: 0
+    max_size: <based on flush::bytes>
+    sizer: items
+```
-
-### Internal batching (default)
-
-By default, the exporter performs its own buffering and batching, as configured through the `flush` setting, unless the `sending_queue::batch` or the `batcher` settings are defined. In that case, batching is controlled by either of the two settings, depending on the version.
-
-### Custom batching
+The default configuration is chosen to be close to the defaults of the exporter's previous built-in batching feature. For more details on the `sending_queue` settings, refer to the [`exporterhelper` documentation](https://github.com/open-telemetry/opentelemetry-collector/blob/main/exporter/exporterhelper/README.md).
````
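Putting those defaults to work, a minimal collector configuration only needs to declare the exporter; the default sending queue applies without any extra settings. This is a sketch, not part of the docs under review: the endpoint and the `otlp` receiver wiring are illustrative placeholders.

```yaml
exporters:
  elasticsearch:
    # Placeholder endpoint; the default sending_queue needs no extra settings.
    endpoint: https://elasticsearch:9200

service:
  pipelines:
    logs:
      receivers: [otlp]          # assumes an otlp receiver is defined elsewhere
      exporters: [elasticsearch]
```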
**Comment on lines 136 to 153**

**Member:** I'm afraid these need to be in an `applies_to` 9.3.

**Contributor (author):** All the default configs? If that's the case, we could put the PR on hold till 9.3 is up.
````diff
-::::{applies-switch}
-
-:::{applies-item} stack: ga 9.2
-Batching support in sending queue is deactivated by default. To turn it on, enable sending queue and define `sending_queue::batch`.
-
-For example:
+You can customize the sending queue configuration:
 
 ```yaml subs=true
 exporters:
@@ -167,11 +162,18 @@ exporters:
       batch:
         min_size: 1000
         max_size: 10000
-        timeout: 5s
+        flush_timeout: 5s
````
**Member:** good catch
````diff
         sizer: items
 ```
-:::
 
-:::{applies-item} stack: ga 9.0, deprecated 9.2
+### Deprecated batcher configuration
+```{applies_to}
+stack: ga 9.0, deprecated 9.2
````
|
Contributor
There was a problem hiding this comment. Choose a reason for hiding this commentThe reason will be displayed to describe this comment to others. Learn more. FYI, batcher is now removed from the config. Do we add something here for this for the 9.3 version to signal that batcher is no longer there? |
````diff
+```
 
+:::{warning}
+The `batcher` configuration is deprecated and will be removed in a future version. Use `sending_queue::batch` instead.
+:::
 
 Batching can be enabled and configured with the `batcher` section, using [common `batcher` settings](https://github.com/open-telemetry/opentelemetry-collector/blob/main/exporter/exporterhelper/internal/queue_sender.go).
 
@@ -184,7 +186,7 @@ Batching can be enabled and configured with the `batcher` section, using [common
````
````diff
 For example:
 
-```yaml subs=true
+```yaml
 exporters:
   elasticsearch:
     endpoint: https://elasticsearch:9200
@@ -194,16 +196,14 @@ exporters:
       max_size: 10000
       flush_timeout: 5s
 ```
-:::
-::::
````
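For readers moving off the deprecated section, the two example configurations in this diff use the same field names (`min_size`, `max_size`, `flush_timeout`), so the migration is largely mechanical. The following side-by-side sketch is an editorial illustration, not part of the docs under review; the endpoints are placeholders:

```yaml
# Deprecated form (stack: ga 9.0, deprecated 9.2): batcher section.
exporters:
  elasticsearch:
    endpoint: https://elasticsearch:9200
    batcher:
      enabled: true
      min_size: 1000
      max_size: 10000
      flush_timeout: 5s
---
# Replacement: the same batching limits under sending_queue::batch.
exporters:
  elasticsearch:
    endpoint: https://elasticsearch:9200
    sending_queue:
      enabled: true
      batch:
        min_size: 1000
        max_size: 10000
        flush_timeout: 5s
```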
````diff
 
 ## Bulk indexing
 
 The Elasticsearch exporter uses the [Elasticsearch Bulk API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-bulk) for indexing documents. Configure the behavior of bulk indexing with the following settings:
 
 | Setting | Default | Description |
 |---------|---------|-------------|
-| `num_workers` | `runtime.NumCPU()` | Number of workers publishing bulk requests concurrently. Note this isn't applicable if `batcher::enabled` is `true` or `false`. |
+| `num_workers` | `runtime.NumCPU()` | Number of workers publishing bulk requests concurrently. Note this isn't applicable when using `sending_queue` (enabled by default) or when `batcher::enabled` is explicitly set. |
````
*theletterf marked this conversation as resolved.*
````diff
 | `flush::bytes` | `5000000` | Write buffer flush size limit before compression. A bulk request is sent immediately when its buffer exceeds this limit. This value should be much lower than Elasticsearch's `http.max_content_length` config to avoid HTTP 413 Entity Too Large errors. Keep this value under 5 MB. |
 | `flush::interval` | `10s` | Write buffer flush time limit. |
 | `retry::enabled` | `true` | Turns request retry on error on or off. Failed requests are retried with exponential backoff. |
@@ -214,7 +214,7 @@ The Elasticsearch exporter uses the [Elasticsearch Bulk API](https://www.elastic
 | `retry::retry_on_status` | `[429]` | Status codes that trigger request or document level retries. Request level retry and document level retry status codes are shared and cannot be configured separately. To avoid duplicates, it defaults to `[429]`. |
 
 :::{note}
-The `flush::interval` config is ignored when the `batcher::enabled` config is explicitly set to true or false.
+The `flush::interval` config is ignored when using `sending_queue` (enabled by default) or when the `batcher::enabled` config is explicitly set.
````
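As a sketch of how the settings in the table fit together, the fragment below tunes bulk indexing explicitly. It is an editorial illustration, not part of the docs under review: the endpoint is a placeholder and the values are illustrative, not recommendations.

```yaml
exporters:
  elasticsearch:
    endpoint: https://elasticsearch:9200
    num_workers: 4            # concurrent bulk publishers; ignored when sending_queue is used
    flush:
      bytes: 1000000          # flush at ~1 MB, well under Elasticsearch's http.max_content_length
      interval: 5s            # time-based flush limit
    retry:
      enabled: true
      retry_on_status: [429]  # retry only on 429 to avoid duplicate documents
```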
**Member:** This statement is either correct or incorrect depending on which version we're on, but this line doesn't have

**Contributor (author):** Which version would it be true for?
````diff
 :::
 
 Starting with Elasticsearch 8.18, the [`include_source_on_error`](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-bulk#operation-bulk-include_source_on_error) query parameter allows users to receive the source document in the error response if there were parsing errors in the bulk request. In the exporter, the equivalent configuration is also named `include_source_on_error`.
````
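For instance, opting into that behavior could look like the following sketch (the endpoint is a placeholder; this example is not part of the docs under review):

```yaml
exporters:
  elasticsearch:
    endpoint: https://elasticsearch:9200
    # Ask Elasticsearch 8.18+ to include the offending source document
    # in bulk-response errors, to ease debugging of parsing failures.
    include_source_on_error: true
```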