# [DOCS] Enhance troubleshooting high cpu page. Opster migration #909

@@ -17,8 +17,6 @@ mapped_pages:

If a thread pool is depleted, {{es}} will [reject requests](rejected-requests.md) related to the thread pool. For example, if the `search` thread pool is depleted, {{es}} will reject search requests until more threads are available.

::::{tip}
If you're using {{ech}}, you can use AutoOps to monitor your cluster. AutoOps significantly simplifies cluster management with performance recommendations, resource utilization visibility, and real-time issue detection with resolution paths. For more information, refer to [](/deploy-manage/monitor/autoops.md).
::::

@@ -27,7 +25,7 @@

## Diagnose high CPU usage [diagnose-high-cpu-usage]

### Check CPU usage [check-cpu-usage]

You can check the CPU usage per node using the [cat nodes API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cat-nodes):

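For example, a minimal request (the `v` and `s` parameters are standard cat API options for column headers and sorting) that sorts nodes by CPU usage so the busiest nodes appear first:

```console
GET _cat/nodes?v=true&s=cpu:desc
```

The `cpu` column in the output shows each node's recent CPU usage as a percentage.
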
@@ -62,7 +60,7 @@ To track CPU usage over time, we recommend enabling monitoring:
::::::

:::::::

### Check hot threads [check-hot-threads]

If a node has high CPU usage, use the [nodes hot threads API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-nodes-hot-threads) to check for resource-intensive threads running on the node.

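For example, to fetch hot threads for all nodes in the cluster:

```console
GET _nodes/hot_threads
```

To narrow the output, you can scope the request to a single node, for example `GET _nodes/my-node/hot_threads` (where `my-node` is a hypothetical node name).
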
@@ -77,17 +75,61 @@

The following tips outline the most common causes of high CPU usage and their solutions.

> **Review comment:** Looking at this section, the first three items are not CPU usage reduction recommendations. What would you think about breaking the section into "Common causes of high CPU usage" and "Reduce high CPU usage", so one has links to additional problem spaces and one has general recommendations?

### Check JVM garbage collection [check-jvm-garbage-collection]

High CPU usage is often caused by excessive JVM garbage collection (GC) activity. This excessive GC typically arises from configuration problems or inefficient queries causing increased heap memory usage.

For optimal JVM performance, garbage collection should meet these criteria:

| GC type | Completion time | Occurrence frequency |
|---------|-----------------|----------------------|
| Young GC | <50ms | ~once per 10 seconds |
| Old GC | <1s | ≤once per 10 minutes |

|
||||||
Excessive JVM garbage collection usually indicates high heap memory usage. Common potential reasons for increased heap memory usage include: | ||||||
|
||||||
* Oversharding of indices
* Very large aggregation queries
* Excessively large bulk indexing requests
* Inefficient or incorrect mapping definitions
* Improper heap size configuration
* Misconfiguration of JVM new generation ratio (`-XX:NewRatio`)

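To judge whether garbage collection stays within these bounds, one option is the nodes stats API, which reports per-collector GC counts and total collection time; comparing two samples taken a few minutes apart gives the frequency and average duration:

```console
GET _nodes/stats/jvm
```
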
### Hot spotting [high-cpu-usage-hot-spotting]

You might experience high CPU usage on specific data nodes or an entire [data tier](/manage-data/lifecycle/data-tiers.md) if traffic isn’t evenly distributed. This is known as [hot spotting](hotspotting.md). Hot spotting commonly occurs when read or write applications don’t evenly distribute requests across nodes, or when indices receiving heavy write activity, such as indices in the hot tier, have their shards concentrated on just one or a few nodes.

For details on diagnosing and resolving these issues, refer to [](hotspotting.md).

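One quick way to check for uneven shard distribution, a common contributor to hot spotting, is the cat allocation API, which shows the shard count and disk usage per node:

```console
GET _cat/allocation?v=true
```
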
### Oversharding [high-cpu-usage-oversharding]

Oversharding occurs when a cluster has too many shards, often because shards are smaller than optimal. While {{es}} doesn’t have a strict minimum shard size, an excessive number of small shards can negatively impact performance. Each shard consumes cluster resources, since {{es}} must maintain metadata and manage shard states across all nodes.

If you have too many small shards, you can address this by doing the following:

* Removing empty or unused indices.
* Deleting or closing indices containing outdated or unnecessary data.
* Reindexing smaller shards into fewer, larger shards to optimize cluster performance.

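To find candidates for cleanup, you can list shards sorted by store size so the smallest appear first (a sketch using the cat API's `s` sort parameter):

```console
GET _cat/shards?v=true&s=store:asc
```
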
If your shards are sized correctly but you are still experiencing oversharding, creating a more aggressive [index lifecycle management strategy](/manage-data/lifecycle/index-lifecycle-management.md) or deleting old indices can help reduce the number of shards.

For more information, refer to [](/deploy-manage/production-guidance/optimize-performance/size-shards.md).

### Additional recommendations

To further reduce CPU load or mitigate temporary spikes in resource usage, consider these steps:

#### Scale your cluster [scale-your-cluster]

Heavy indexing and search loads can deplete smaller thread pools. Add nodes or upgrade existing ones to handle the load more effectively.

#### Spread out bulk requests [spread-out-bulk-requests]

While more efficient than individual requests, large [bulk indexing](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-bulk-1) or [multi-search](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-msearch) requests still require CPU resources. Submit smaller requests, and space them out to avoid overwhelming thread pools.

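As a sketch, a smaller bulk request might look like the following (`my-index` and the documents are hypothetical); sending several modest requests with pauses between them is gentler on the `write` thread pool than one very large request:

```console
POST _bulk
{ "index": { "_index": "my-index" } }
{ "message": "first document" }
{ "index": { "_index": "my-index" } }
{ "message": "second document" }
```
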
#### Cancel long-running searches [cancel-long-running-searches]

Long-running searches can block threads in the `search` thread pool. Regularly use the [task management API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-tasks-list) to identify and cancel searches that consume excessive CPU time.

```console
GET _tasks?actions=*search&detailed
```

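Once you've identified a problematic search from the task list, you can cancel it by its task ID (the `<node_id>:<task_id>` value below is a placeholder taken from the task list output):

```console
POST _tasks/<node_id>:<task_id>/_cancel
```
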
> **Author reply:** @shainaraskas, I made the changes to the headings. I'm not sure if "check-cpu-usage", for example, needs to be more unique to this page or not. Also, I rewrote the oversharding paragraph to remove that first sentence, but realized I needed to clarify it a bit.