After migrating from Grafana Agent to Grafana Alloy by upgrading the Kubernetes monitoring Helm chart from v1 to v2, I have encountered recurring errors from the Mimir distributor reporting out-of-order samples in the target_info series:
ts=2025-02-25T14:46:20.230924388Z caller=push.go:130 level=error user=anonymous msg="push error" err="rpc error: code = Code(400) desc = failed pushing to ingester: user=anonymous: the sample has been rejected because another sample with a more recent timestamp has already been ingested and out-of-order samples are not allowed (err-mimir-sample-out-of-order). The affected sample has timestamp 2025-02-25T14:46:15.63Z and is from series {__name__=\"target_info\", cluster=\"pepe\"}"
Note that I am running a self-hosted Mimir setup with version 2.9.0.
These errors started appearing immediately after the migration, and they were not present before. Additionally, I noticed that the target_info metric was not collected prior to the migration.
Furthermore, I noticed that the scraped series do indeed end up with duplicated label sets: every minute I receive approximately 50 errors concerning the same series with the same labels. As far as I know, the target_info metric should include differentiating labels such as job or instance, correct?
I am unable to determine why this metric started being collected after the migration or why the mentioned error occurs. It seems that the metric originates from the otelcol.exporter.prometheus component and is controlled via the include_target_info setting, which defaults to true.
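For reference, if that component could be configured directly, a minimal sketch of disabling the metric might look like this (the forward_to receiver below is just a placeholder for whatever the chart actually wires up):

otelcol.exporter.prometheus "default" {
  // Stop generating the target_info metric (the argument defaults to true).
  include_target_info = false

  // Placeholder receiver; in my setup the chart wires this up automatically.
  forward_to = [prometheus.remote_write.metrics_service.receiver]
}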
However, it appears that disabling collection of this metric via Helm values is not currently possible. I noticed recent changes related to fine-tuning exporter settings, such as this PR. Would it make sense to introduce a similar configuration option for this setting?
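To illustrate what I mean, a purely hypothetical values snippet (this key does not exist in the chart today, it only shows the kind of toggle I am asking about) could look like:

# Hypothetical Helm values key, for illustration only
metrics:
  otelExporter:
    includeTargetInfo: false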
Additionally, do you have any insights into why ingestion of this metric might fail in my setup?
Below is my current monitoring configuration:
Thanks in advance for your help!