From adcabc5f9f69c21d53597add50095fdf5baed451 Mon Sep 17 00:00:00 2001
From: Rashmi Chandrashekar
Date: Mon, 24 Feb 2025 10:06:12 -0800
Subject: [PATCH 1/8] hpa docs

---
 .../prometheus-metrics-scrape-autoscaling.md  | 57 +++++++++++++++++++
 articles/azure-monitor/toc.yml                |  3 +
 2 files changed, 60 insertions(+)
 create mode 100644 articles/azure-monitor/containers/prometheus-metrics-scrape-autoscaling.md

diff --git a/articles/azure-monitor/containers/prometheus-metrics-scrape-autoscaling.md b/articles/azure-monitor/containers/prometheus-metrics-scrape-autoscaling.md
new file mode 100644
index 0000000000..33df749f39
--- /dev/null
+++ b/articles/azure-monitor/containers/prometheus-metrics-scrape-autoscaling.md
@@ -0,0 +1,57 @@
+---
+title: Autoscaling support for addon pods
+description: Documentation regarding addon pod autoscaling support for Azure Managed Prometheus
+ms.topic: conceptual
+ms.date: 2/28/2024
+ms.reviewer: rashmy
+---
+
+# Managed Prometheus support for Horizontal Pod Autoscaling for Collector Replicaset
+
+### Overview
+The Managed Prometheus Addon now supports Horizontal Pod Autoscaling(HPA) for the ama-metrics replicaset pod.
+With this, the ama-metrics replicaset pod which handles the scraping of prometheus metrics with custom jobs can scale automatically based on the memory utilization. By default, the HPA is configured with a minimum of 2 replicas (which is our global default) and a maximum of 12 replicas. The customers will also the have capability to set the shards to any number of minimum and maximum repliacas as long as they are within the range of 2 and 12.
+With this, HPA automatically takes care of scaling based on the memory utlization of the ama-metrics pod to avoid OOMKills along with providing customer flexibility.
+
+[Kubernetes support for HPA](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/)
+
+[HPA Deployment Spec for Managed Prometheus Addon](../../otelcollector/deploy/addon-chart/azure-monitor-metrics-addon/templates/ama-metrics-collector-hpa.yaml)
+
+### Update Min and Max shards
+In order to update the min and max shards on the HPA, the HPA object **ama-metrics-hpa** in the kube-system namespace can be edited and it will not be reconciled as long as it is within the supported range of 2 and 12.
+
+**Update Min replicas**
+```bash
+kubectl patch hpa ama-metrics-hpa -n kube-system --type merge --patch '{"spec": {"minReplicas": 4}}'
+```
+
+**Update Max replicas**
+```bash
+kubectl patch hpa ama-metrics-hpa -n kube-system --type merge --patch '{"spec": {"maxReplicas": 10}}'
+```
+
+**Update Min and Max replicas**
+```bash
+kubectl patch hpa ama-metrics-hpa -n kube-system --type merge --patch '{"spec": {"minReplicas": 3, "maxReplicas": 10}}'
+```
+
+**or**
+
+The min and max replicas can also be edited by doing a **kubectl edit** and updating the spec in the editor
+```bash
+kubectl edit hpa ama-metrics-hpa -n kube-system
+```
+
+### Update min and max shards to disable HPA scaling
+HPA should be able to handle the load automatically for varying customer needs. But, it it doesnt fit the needs, the customer can set min shards = max shards so that HPA doesnt scale the replicas based on the varying loads.
+
+Ex - If the customer wants to set the shards to 8 and not have the HPA update the shards, update the min and max shards to 8.
+
+**Update Min and Max replicas**
+```bash
+kubectl patch hpa ama-metrics-hpa -n kube-system --type merge --patch '{"spec": {"minReplicas": 8, "maxReplicas": 8}}'
+```
+
+## Next steps
+
+* [Troubleshoot issues with Prometheus data collection](prometheus-metrics-troubleshoot.md).
diff --git a/articles/azure-monitor/toc.yml b/articles/azure-monitor/toc.yml
index 644b9bb8a3..0cd1c1a2d4 100644
--- a/articles/azure-monitor/toc.yml
+++ b/articles/azure-monitor/toc.yml
@@ -494,6 +494,9 @@ items:
       - name: High scale
         displayName: Prometheus
         href: containers/prometheus-metrics-scrape-scale.md
+      - name: Autoscale support for Managed Prometheus Addon
+        displayName: Prometheus
+        href: containers/prometheus-metrics-scrape-autoscaling.md
       - name: Integrations
         items:
         - name: Set up exporters for common workloads

From a3c351e7118a737e97e3179d28bb7bf017f38858 Mon Sep 17 00:00:00 2001
From: Rashmi Chandrashekar
Date: Mon, 24 Feb 2025 10:11:37 -0800
Subject: [PATCH 2/8] fix link

---
 .../containers/prometheus-metrics-scrape-autoscaling.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/articles/azure-monitor/containers/prometheus-metrics-scrape-autoscaling.md b/articles/azure-monitor/containers/prometheus-metrics-scrape-autoscaling.md
index 33df749f39..6ffbd33d28 100644
--- a/articles/azure-monitor/containers/prometheus-metrics-scrape-autoscaling.md
+++ b/articles/azure-monitor/containers/prometheus-metrics-scrape-autoscaling.md
@@ -15,7 +15,7 @@ With this, HPA automatically takes care of scaling based on the memory utlizatio
 
 [Kubernetes support for HPA](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/)
 
-[HPA Deployment Spec for Managed Prometheus Addon](../../otelcollector/deploy/addon-chart/azure-monitor-metrics-addon/templates/ama-metrics-collector-hpa.yaml)
+[HPA Deployment Spec for Managed Prometheus Addon](https://github.com/Azure/prometheus-collector/blob/main/otelcollector/deploy/addon-chart/azure-monitor-metrics-addon/templates/ama-metrics-collector-hpa.yaml)
 
 ### Update Min and Max shards
 In order to update the min and max shards on the HPA, the HPA object **ama-metrics-hpa** in the kube-system namespace can be edited and it will not be reconciled as long as it is within the supported range of 2 and 12.
From 446fc4a5ebc23d8bbd06b75834ce297c010927d9 Mon Sep 17 00:00:00 2001
From: Rashmi Chandrashekar
Date: Mon, 24 Feb 2025 10:20:13 -0800
Subject: [PATCH 3/8] fixes

---
 .../containers/prometheus-metrics-scrape-autoscaling.md | 2 +-
 articles/azure-monitor/toc.yml                          | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/articles/azure-monitor/containers/prometheus-metrics-scrape-autoscaling.md b/articles/azure-monitor/containers/prometheus-metrics-scrape-autoscaling.md
index 6ffbd33d28..479cbfdbec 100644
--- a/articles/azure-monitor/containers/prometheus-metrics-scrape-autoscaling.md
+++ b/articles/azure-monitor/containers/prometheus-metrics-scrape-autoscaling.md
@@ -6,7 +6,7 @@ ms.date: 2/28/2024
 ms.reviewer: rashmy
 ---
 
-# Managed Prometheus support for Horizontal Pod Autoscaling for Collector Replicaset
+# Horizontal Pod Autoscaling for Collector Replicaset
 
 ### Overview
 The Managed Prometheus Addon now supports Horizontal Pod Autoscaling(HPA) for the ama-metrics replicaset pod.
diff --git a/articles/azure-monitor/toc.yml b/articles/azure-monitor/toc.yml
index 0cd1c1a2d4..d20eb81c32 100644
--- a/articles/azure-monitor/toc.yml
+++ b/articles/azure-monitor/toc.yml
@@ -494,7 +494,7 @@ items:
       - name: High scale
         displayName: Prometheus
         href: containers/prometheus-metrics-scrape-scale.md
-      - name: Autoscale support for Managed Prometheus Addon
+      - name: Autoscale support
         displayName: Prometheus
         href: containers/prometheus-metrics-scrape-autoscaling.md
       - name: Integrations

From b84125ce9371eb09ac93b9c46810550b996b958d Mon Sep 17 00:00:00 2001
From: Rashmi Chandrashekar
Date: Mon, 24 Feb 2025 10:45:01 -0800
Subject: [PATCH 4/8] changes

---
 .../containers/prometheus-metrics-scrape-autoscaling.md | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/articles/azure-monitor/containers/prometheus-metrics-scrape-autoscaling.md b/articles/azure-monitor/containers/prometheus-metrics-scrape-autoscaling.md
index 479cbfdbec..e4f3fa3e16 100644
--- a/articles/azure-monitor/containers/prometheus-metrics-scrape-autoscaling.md
+++ b/articles/azure-monitor/containers/prometheus-metrics-scrape-autoscaling.md
@@ -10,15 +10,15 @@ ms.reviewer: rashmy
 
 ### Overview
 The Managed Prometheus Addon now supports Horizontal Pod Autoscaling(HPA) for the ama-metrics replicaset pod.
+The HPA allows the ama-metrics replicaset pod, which scrapes Prometheus metrics with custom jobs, to scale automatically based on memory utilization to prevent OOMKills. By default, the HPA is configured with a minimum of 2 replicas and a maximum of 12 replicas. Users can configure the number of shards within the range of 2 to 12 replicas. [Kubernetes support for HPA](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/) [HPA Deployment Spec for Managed Prometheus Addon](https://github.com/Azure/prometheus-collector/blob/main/otelcollector/deploy/addon-chart/azure-monitor-metrics-addon/templates/ama-metrics-collector-hpa.yaml) ### Update Min and Max shards -In order to update the min and max shards on the HPA, the HPA object **ama-metrics-hpa** in the kube-system namespace can be edited and it will not be reconciled as long as it is within the supported range of 2 and 12. +The HPA object named **ama-metrics-hpa** in the kube-system namespace can be edited to update the min and max shards/replicaset instances. +Note that the changes will not be reconciled as long as they remain within the supported range of 2 to 12. **Update Min replicas** ```bash @@ -43,7 +43,8 @@ kubectl edit hpa ama-metrics-hpa -n kube-system ``` ### Update min and max shards to disable HPA scaling -HPA should be able to handle the load automatically for varying customer needs. But, it it doesnt fit the needs, the customer can set min shards = max shards so that HPA doesnt scale the replicas based on the varying loads. +If the default HPA settings do not meet the customer's requirements, they can configure the minimum and maximum number of shards to be the same. +This prevents the HPA from scaling the replicas based on varying loads, ensuring a consistent number of replicas. Ex - If the customer wants to set the shards to 8 and not have the HPA update the shards, update the min and max shards to 8. 
From 6b05be64d12e5d377f5675774f44706f4b189261 Mon Sep 17 00:00:00 2001
From: Rashmi Chandrashekar
Date: Mon, 24 Feb 2025 11:36:25 -0800
Subject: [PATCH 5/8] pr comments

---
 .../containers/prometheus-metrics-scrape-autoscaling.md     | 4 ++--
 .../containers/prometheus-metrics-scrape-scale.md           | 6 +++---
 .../containers/prometheus-metrics-troubleshoot.md           | 4 +++-
 .../azure-monitor/essentials/prometheus-metrics-overview.md | 4 ++++
 4 files changed, 12 insertions(+), 6 deletions(-)

diff --git a/articles/azure-monitor/containers/prometheus-metrics-scrape-autoscaling.md b/articles/azure-monitor/containers/prometheus-metrics-scrape-autoscaling.md
index e4f3fa3e16..42ec937879 100644
--- a/articles/azure-monitor/containers/prometheus-metrics-scrape-autoscaling.md
+++ b/articles/azure-monitor/containers/prometheus-metrics-scrape-autoscaling.md
@@ -9,7 +9,7 @@ ms.reviewer: rashmy
 # Horizontal Pod Autoscaling for Collector Replicaset
 
 ### Overview
-The Managed Prometheus Addon now supports Horizontal Pod Autoscaling(HPA) for the ama-metrics replicaset pod.
+Azure Managed Prometheus supports Horizontal Pod Autoscaling(HPA) for the ama-metrics replicaset pod by default.
 The HPA allows the ama-metrics replicaset pod, which scrapes Prometheus metrics with custom jobs, to scale automatically based on memory utilization to prevent OOMKills. By default, the HPA is configured with a minimum of 2 replicas and a maximum of 12 replicas. Users can configure the number of shards within the range of 2 to 12 replicas.
 
 [Kubernetes support for HPA](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/)
 
@@ -18,7 +18,7 @@ The HPA allows the ama-metrics replicaset pod, which scrapes Prometheus metrics
 [HPA Deployment Spec for Managed Prometheus Addon](https://github.com/Azure/prometheus-collector/blob/main/otelcollector/deploy/addon-chart/azure-monitor-metrics-addon/templates/ama-metrics-collector-hpa.yaml)
 
 ### Update Min and Max shards
 The HPA object named **ama-metrics-hpa** in the kube-system namespace can be edited to update the min and max shards/replicaset instances.
+Note that if the changes are not within the supported range of 2 to 12 they will be ineffective and fall back to the last known good. **Update Min replicas** ```bash diff --git a/articles/azure-monitor/containers/prometheus-metrics-scrape-scale.md b/articles/azure-monitor/containers/prometheus-metrics-scrape-scale.md index 5e348c679f..f6c7d8cb08 100644 --- a/articles/azure-monitor/containers/prometheus-metrics-scrape-scale.md +++ b/articles/azure-monitor/containers/prometheus-metrics-scrape-scale.md @@ -14,9 +14,9 @@ This article provides guidance on performance that can be expected when collecti The CPU and memory usage is correlated with the number of bytes of each sample and the number of samples scraped. These benchmarks are based on the [default targets scraped](prometheus-metrics-scrape-default.md), volume of custom metrics scraped, and number of nodes, pods, and containers. These numbers are meant as a reference since usage can still vary significantly depending on the number of time series and bytes per metric. -The upper volume limit per pod is currently about 3-3.5 million samples per minute, depending on the number of bytes per sample. This limitation is addressed when sharding is added in future. +The upper volume limit per pod is currently about 3-3.5 million samples per minute, depending on the number of bytes per sample. -The agent consists of a deployment with one replica and DaemonSet for scraping metrics. The DaemonSet scrapes any node-level targets such as cAdvisor, kubelet, and node exporter. You can also configure it to scrape any custom targets at the node level with static configs. The replica set scrapes everything else such as kube-state-metrics or custom scrape jobs that utilize service discovery. +The agent consists of a deployment with two replicas by default (which will be automatically configured by HPA based on memory utilization) and DaemonSet for scraping metrics. 
The DaemonSet scrapes any node-level targets such as cAdvisor, kubelet, and node exporter. You can also configure it to scrape any custom targets at the node level with static configs. The replica set scrapes everything else such as kube-state-metrics or custom scrape jobs that utilize service discovery.
 
 ## Comparison between small and large cluster for replica
 
@@ -38,7 +38,7 @@ For more custom metrics, the single pod behaves the same as the replica pod depe
 
 ## Schedule ama-metrics replica pod on a node pool with more resources
 
-A large volume of metrics per pod needs a node with enough CPU and memory. If the *ama-metrics* replica pod isn't scheduled on a node or node pool with enough resources, it might get OOMKilled and go into CrashLoopBackoff. To fix this, you can add the label `azuremonitor/metrics.replica.preferred=true` to a node or node pool on your cluster with higher resources (in [system node pool](/azure/aks/use-system-pools#system-and-user-node-pools)). This ensures the replica pod gets scheduled on that node. You can also create extra system pools with larger nodes and add the same label. It's better to label node pools rather than individual nodes so new nodes in the pool can also be used for scheduling.
+A large volume of metrics per pod needs a node with enough CPU and memory. If the *ama-metrics* replica pods aren't scheduled on nodes or node pools with enough resources, they might get OOMKilled and go into CrashLoopBackoff. To fix this, you can add the label `azuremonitor/metrics.replica.preferred=true` to nodes or node pools on your cluster with higher resources (in [system node pool](/azure/aks/use-system-pools#system-and-user-node-pools)). This ensures the replica pods get scheduled on those nodes. You can also create extra system pools with larger nodes and add the same label. It's better to label node pools rather than individual nodes so new nodes in the pool can also be used for scheduling.
 ```
 kubectl label nodes azuremonitor/metrics.replica.preferred="true"
 ```

diff --git a/articles/azure-monitor/containers/prometheus-metrics-troubleshoot.md b/articles/azure-monitor/containers/prometheus-metrics-troubleshoot.md
index e214e32d82..c9626684d6 100644
--- a/articles/azure-monitor/containers/prometheus-metrics-troubleshoot.md
+++ b/articles/azure-monitor/containers/prometheus-metrics-troubleshoot.md
@@ -197,6 +197,8 @@ If creation of Azure Monitor Workspace fails with an error saying "*Resource 're
 
 When you create an Azure Monitor workspace, by default a data collection rule and a data collection endpoint in the form "*azure-monitor-workspace-name*" will automatically be created in a resource group in the form "*MA_azure-monitor-workspace-name_location_managed*". Currently there's no way to change the names of these resources, and you'll need to set an exemption on the Azure Policy to exempt the above resources from policy evaluation. See [Azure Policy exemption structure](/azure/governance/policy/concepts/exemption-structure).
 
-## Next steps
+## High Scale considerations
+If you are collecting metrics at high scale, check the sections below for HPA and high scale guidance.
 
 - [Check considerations for collecting metrics at high scale](prometheus-metrics-scrape-scale.md).
+- [Horizontal Pod Autoscaling for collector replicaset](prometheus-metrics-scrape-autoscaling.md)
\ No newline at end of file

diff --git a/articles/azure-monitor/essentials/prometheus-metrics-overview.md b/articles/azure-monitor/essentials/prometheus-metrics-overview.md
index 24fae99a99..2136b0239e 100644
--- a/articles/azure-monitor/essentials/prometheus-metrics-overview.md
+++ b/articles/azure-monitor/essentials/prometheus-metrics-overview.md
@@ -29,6 +29,10 @@ Azure Monitor managed service for Prometheus provides preconfigured alerts, rule
 
 Pricing is based on ingestion and query with no additional storage cost.
For more information, see the **Metrics** tab in [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/).
 
+> [!NOTE]
+> Azure Managed Prometheus supports Horizontal Pod Autoscaling for replicaset pods in AKS and Arc-enabled Kubernetes clusters.
+See [Autoscaling](../containers/prometheus-metrics-scrape-autoscaling.md) to learn more.
+
 ### Enable Azure Monitor managed service for Prometheus
 
 Azure Monitor managed service for Prometheus collects data from AKS and Azure Arc-enabled Kubernetes.

From b59ac914678ef863b426854e02ace98e82453758 Mon Sep 17 00:00:00 2001
From: Rashmi Chandrashekar
Date: Mon, 24 Feb 2025 11:53:28 -0800
Subject: [PATCH 6/8] more addressing

---
 .../containers/prometheus-metrics-scrape-autoscaling.md     | 4 ++--
 .../azure-monitor/essentials/prometheus-metrics-overview.md | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/articles/azure-monitor/containers/prometheus-metrics-scrape-autoscaling.md b/articles/azure-monitor/containers/prometheus-metrics-scrape-autoscaling.md
index 42ec937879..9a7a5c6f94 100644
--- a/articles/azure-monitor/containers/prometheus-metrics-scrape-autoscaling.md
+++ b/articles/azure-monitor/containers/prometheus-metrics-scrape-autoscaling.md
@@ -14,8 +14,6 @@ The HPA allows the ama-metrics replicaset pod, which scrapes Prometheus metrics
 
 [Kubernetes support for HPA](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/)
 
-[HPA Deployment Spec for Managed Prometheus Addon](https://github.com/Azure/prometheus-collector/blob/main/otelcollector/deploy/addon-chart/azure-monitor-metrics-addon/templates/ama-metrics-collector-hpa.yaml)
-
 ### Update Min and Max shards
 The HPA object named **ama-metrics-hpa** in the kube-system namespace can be edited to update the min and max shards/replicaset instances.
 Note that if the changes are not within the supported range of 2 to 12 they will be ineffective and fall back to the last known good.
@@ -53,6 +51,8 @@ Ex - If the customer wants to set the shards to 8 and not have the HPA update th
 kubectl patch hpa ama-metrics-hpa -n kube-system --type merge --patch '{"spec": {"minReplicas": 8, "maxReplicas": 8}}'
 ```
 
+*A kubectl edit on the ama-metrics-hpa spec gives more information about the scale-up and scale-down configurations used for HPA*
+
 ## Next steps
 
 * [Troubleshoot issues with Prometheus data collection](prometheus-metrics-troubleshoot.md).

diff --git a/articles/azure-monitor/essentials/prometheus-metrics-overview.md b/articles/azure-monitor/essentials/prometheus-metrics-overview.md
index 2136b0239e..7f694b02a2 100644
--- a/articles/azure-monitor/essentials/prometheus-metrics-overview.md
+++ b/articles/azure-monitor/essentials/prometheus-metrics-overview.md
@@ -30,7 +30,7 @@ Azure Monitor managed service for Prometheus provides preconfigured alerts, rule
 Pricing is based on ingestion and query with no additional storage cost. For more information, see the **Metrics** tab in [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/).
 
 > [!NOTE]
-> Azure Managed Prometheus supports Horizontal Pod Autoscaling for replicaset pods in AKS and Arc-enabled Kubernetes clusters.
+> Azure Managed Prometheus supports Horizontal Pod Autoscaling for replicaset pods in AKS Kubernetes clusters.
 See [Autoscaling](../containers/prometheus-metrics-scrape-autoscaling.md) to learn more.
 ### Enable Azure Monitor managed service for Prometheus
 
 Azure Monitor managed service for Prometheus collects data from AKS and Azure Arc-enabled Kubernetes.

From d18d1563eee4b75e78246a758fab5d04fc18aa0d Mon Sep 17 00:00:00 2001
From: Rashmi Chandrashekar
Date: Wed, 26 Feb 2025 10:53:20 -0800
Subject: [PATCH 7/8] fixing comments

---
 .../prometheus-metrics-scrape-autoscaling.md | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/articles/azure-monitor/containers/prometheus-metrics-scrape-autoscaling.md b/articles/azure-monitor/containers/prometheus-metrics-scrape-autoscaling.md
index 9a7a5c6f94..7018bb192b 100644
--- a/articles/azure-monitor/containers/prometheus-metrics-scrape-autoscaling.md
+++ b/articles/azure-monitor/containers/prometheus-metrics-scrape-autoscaling.md
@@ -1,22 +1,22 @@
 ---
-title: Autoscaling support for addon pods
-description: Documentation regarding addon pod autoscaling support for Azure Managed Prometheus
+title: Autoscaling Support for Addon Pods
+description: Documentation regarding addon pod autoscaling support for Azure Managed Prometheus.
 ms.topic: conceptual
 ms.date: 2/28/2024
 ms.reviewer: rashmy
 ---
 
-# Horizontal Pod Autoscaling for Collector Replicaset
+# Horizontal Pod Autoscaling(HPA) for Collector Replica set
 
 ### Overview
-Azure Managed Prometheus supports Horizontal Pod Autoscaling(HPA) for the ama-metrics replicaset pod by default.
-The HPA allows the ama-metrics replicaset pod, which scrapes Prometheus metrics with custom jobs, to scale automatically based on memory utilization to prevent OOMKills. By default, the HPA is configured with a minimum of 2 replicas and a maximum of 12 replicas. Users can configure the number of shards within the range of 2 to 12 replicas.
+Azure Managed Prometheus supports Horizontal Pod Autoscaling (HPA) for the ama-metrics replica set pod by default.
+The HPA allows the ama-metrics replica set pod, which scrapes Prometheus metrics with custom jobs, to scale automatically based on memory utilization to prevent OOMKills.
By default, the HPA is configured with a minimum of two replicas and a maximum of 12 replicas. Users can configure the number of shards within the range of 2 to 12 replicas.
 
 [Kubernetes support for HPA](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/)
 
 ### Update Min and Max shards
-The HPA object named **ama-metrics-hpa** in the kube-system namespace can be edited to update the min and max shards/replicaset instances.
-Note that if the changes are not within the supported range of 2 to 12 they will be ineffective and fall back to the last known good.
+The HPA object named **ama-metrics-hpa** in the kube-system namespace can be edited to update the min and max shards/replica set instances.
+If the changes aren't within the supported range of 2 to 12, they are ineffective and fall back to the last known good configuration.
 
 **Update Min replicas**
 ```bash
@@ -41,7 +41,7 @@ kubectl edit hpa ama-metrics-hpa -n kube-system
 ```
 
 ### Update min and max shards to disable HPA scaling
-If the default HPA settings do not meet the customer's requirements, they can configure the minimum and maximum number of shards to be the same.
+If the default HPA settings don't meet the customer's requirements, they can configure the minimum and maximum number of shards to be the same.
 This prevents the HPA from scaling the replicas based on varying loads, ensuring a consistent number of replicas.
 
 Ex - If the customer wants to set the shards to 8 and not have the HPA update the shards, update the min and max shards to 8.
From ca3b703d3540d302cd3d40ad2b6ca85c3f3395b2 Mon Sep 17 00:00:00 2001
From: Dennis Rea
Date: Fri, 28 Feb 2025 10:06:20 -0800
Subject: [PATCH 8/8] Fix spacing in HPA section header

---
 .../containers/prometheus-metrics-scrape-autoscaling.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/articles/azure-monitor/containers/prometheus-metrics-scrape-autoscaling.md b/articles/azure-monitor/containers/prometheus-metrics-scrape-autoscaling.md
index 7018bb192b..509553a9c9 100644
--- a/articles/azure-monitor/containers/prometheus-metrics-scrape-autoscaling.md
+++ b/articles/azure-monitor/containers/prometheus-metrics-scrape-autoscaling.md
@@ -6,7 +6,7 @@ ms.date: 2/28/2024
 ms.reviewer: rashmy
 ---
 
-# Horizontal Pod Autoscaling(HPA) for Collector Replica set
+# Horizontal Pod Autoscaling (HPA) for Collector Replica set
 
 ### Overview
 Azure Managed Prometheus supports Horizontal Pod Autoscaling (HPA) for the ama-metrics replica set pod by default.
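
The docs added by this series state that min/max values outside the supported 2 to 12 range are ineffective and fall back to the last known good configuration. The same bound can be checked client-side before calling `kubectl patch`, so an out-of-range value fails fast instead of being silently reverted. A minimal sketch, assuming only the 2 to 12 range and the `kubectl patch` invocation documented above; the script builds and prints the merge-patch payload, and the actual `kubectl` call is left commented out:

```shell
#!/bin/sh
# Pre-flight check for the ama-metrics-hpa merge patch.
# The 2..12 bound mirrors the supported range stated in the docs above;
# values outside it would otherwise be rejected server-side.
min=3
max=10

if [ "$min" -ge 2 ] && [ "$max" -le 12 ] && [ "$min" -le "$max" ]; then
  # Build the same merge-patch payload the docs pass to kubectl.
  patch="{\"spec\": {\"minReplicas\": $min, \"maxReplicas\": $max}}"
  echo "$patch"
  # kubectl patch hpa ama-metrics-hpa -n kube-system --type merge --patch "$patch"
else
  echo "minReplicas/maxReplicas must be within 2..12" >&2
  exit 1
fi
```

Running with `min=3` and `max=10` prints `{"spec": {"minReplicas": 3, "maxReplicas": 10}}`; setting `min=1` instead exits nonzero without touching the cluster.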