troubleshoot/elasticsearch/high-cpu-usage.md
If you're using {{ech}}, you can use AutoOps to monitor your cluster.
## Diagnose high CPU usage [diagnose-high-cpu-usage]
### Check CPU usage [check-cpu-usage]
You can check the CPU usage per node using the [cat nodes API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cat-nodes):
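A minimal sketch of such a request, sorting nodes by CPU and limiting the output to a few columns (the exact column selection here is only illustrative):

```console
GET _cat/nodes?v=true&s=cpu:desc&h=name,cpu,load_1m,load_5m,load_15m
```

Nodes that stay near 100% CPU at the top of this list are the ones to investigate further.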
To track CPU usage over time, we recommend enabling monitoring.
### Check hot threads [check-hot-threads]
If a node has high CPU usage, use the [nodes hot threads API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-nodes-hot-threads) to check for resource-intensive threads running on the node.
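For example, you can request hot threads for the whole cluster or scope the call to a single node (`my-node-1` below is a placeholder node name):

```console
GET _nodes/hot_threads

GET _nodes/my-node-1/hot_threads
```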
This API returns a breakdown of any hot threads in plain text.
The following tips outline the most common causes of high CPU usage and their solutions.
High CPU usage is often caused by excessive JVM garbage collection (GC) activity. Excessive GC typically arises from configuration problems or from inefficient queries that increase heap memory usage.
For optimal JVM performance, garbage collection should meet these criteria:
| GC Type | Completion Time | Occurrence Frequency |
| --- | --- | --- |
| Young GC | Completes quickly, ideally within 50 milliseconds | Does not occur too frequently (approximately once every 10 seconds) |
| Old GC | Completes quickly, ideally within 1 second | Does not occur too frequently (once every 10 minutes or less frequently) |
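To see how garbage collection is actually behaving on each node, one option is the JVM section of the nodes stats API. The sketch below uses `filter_path` only to trim the response to heap usage and per-collector counts and times:

```console
GET _nodes/stats/jvm?filter_path=nodes.*.jvm.mem.heap_used_percent,nodes.*.jvm.gc.collectors
```

Dividing a collector's `collection_time_in_millis` by its `collection_count` gives a rough average pause time to compare against the table above.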
Excessive JVM garbage collection usually indicates high heap memory usage. Common potential reasons for increased heap memory usage include:
* Improper heap size configuration
* Misconfiguration of JVM new generation ratio (`-XX:NewRatio`)
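To verify how the heap and JVM flags are actually configured on each node, you can use the nodes info API; this sketch keeps only the JVM memory settings and startup arguments:

```console
GET _nodes/jvm?filter_path=nodes.*.jvm.mem,nodes.*.jvm.input_arguments
```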
### Hot spotting [high-cpu-usage-hot-spotting]
You might experience high CPU usage on specific data nodes or an entire [data tier](/manage-data/lifecycle/data-tiers.md) if traffic isn’t evenly distributed. This is known as [hot spotting](hotspotting.md). Hot spotting commonly occurs when read or write applications don’t evenly distribute requests across nodes, or when indices receiving heavy write activity, such as indices in the hot tier, have their shards concentrated on just one or a few nodes.
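One way to check whether a busy index's shards are concentrated on only a few nodes is the cat shards API (`my-hot-index` is a placeholder index name):

```console
GET _cat/shards/my-hot-index?v=true&h=index,shard,prirep,node
```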
For details on diagnosing and resolving these issues, refer to [](hotspotting.md).
### Oversharding [high-cpu-usage-oversharding]
Oversharding occurs when a cluster has too many shards, often because individual shards are smaller than optimal. While Elasticsearch doesn’t have a strict minimum shard size, an excessive number of small shards can negatively impact performance. Each shard consumes cluster resources since Elasticsearch must maintain metadata and manage shard states across all nodes.
If you have too many small shards, you can address this by doing the following:
* Removing empty or unused indices.
* Deleting or closing indices containing outdated or unnecessary data.
* Reindexing smaller shards into fewer, larger shards to optimize cluster performance.
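As a rough sketch of that last option, several small daily indices can be reindexed into one larger monthly index and then deleted (index names here are hypothetical):

```console
POST _reindex
{
  "source": { "index": "logs-2025-01-*" },
  "dest": { "index": "logs-2025-01" }
}
```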
If your shards are sized correctly but you are still experiencing oversharding, creating a more aggressive [index lifecycle management strategy](/manage-data/lifecycle/index-lifecycle-management.md) or deleting old indices can help reduce the number of shards.
For more information, refer to [](/deploy-manage/production-guidance/optimize-performance/size-shards.md).
### Additional recommendations
To further reduce CPU load or mitigate temporary spikes in resource usage, consider these steps:
#### Scale your cluster [scale-your-cluster]
Heavy indexing and search loads can deplete smaller thread pools. Add nodes or upgrade existing ones to handle increased indexing and search loads more effectively.
#### Spread out bulk requests [spread-out-bulk-requests]
Submit smaller [bulk indexing](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-bulk-1) or multi-search requests, and space them out to avoid overwhelming thread pools.
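For example, rather than sending one very large request, the same documents can be split into several smaller bulk requests submitted with a pause between them (the index name and documents below are illustrative):

```console
POST _bulk
{ "index": { "_index": "my-index" } }
{ "message": "first document" }
{ "index": { "_index": "my-index" } }
{ "message": "second document" }
```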
#### Cancel long-running searches [cancel-long-running-searches]

Regularly use the [task management API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-tasks-list) to identify and cancel searches that consume excessive CPU time.
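A sketch of that workflow: list the search tasks that are currently running, then cancel a specific one by its task ID (the ID below is a placeholder):

```console
GET _tasks?actions=*search&detailed=true

POST _tasks/oTUltX4IQMOUUVeiohTt8A:12345/_cancel
```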