Replies: 2 comments
- Related to #2430
- I would also very much be interested in this, for troubleshooting purposes: we recently made changes to our Istio configuration, and now otel-collector memory usage is through the roof. I suspect it has something to do with Istio's gRPC connection handling and the number of connections active on the collector, but I can't figure out how to prove it; a sketch of one test I have in mind is below.
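One way I'm considering to test the hypothesis (a hedged sketch only; it assumes the OTLP receiver honors the collector's standard gRPC server keepalive settings): cap the connection age on the receiver and watch whether memory drops as long-lived Istio connections get recycled.

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
        keepalive:
          server_parameters:
            # Force connections to be recycled periodically; if memory
            # usage tracks the number/age of active connections, it
            # should drop once connections start being closed.
            max_connection_age: 5m
            max_connection_age_grace: 30s
```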
Hi!
We are currently running some small tests on our OpenTelemetry Collector to see how it behaves in terms of memory, CPU, latency, etc., while varying parameters such as span size, span rate, and number of clients.
We send traces to the OTLP receiver over gRPC, and we observed that the more clients are connected to the receiver, the more memory it uses; the relation seems fairly linear. Call latency shows a similar linear relation: the more clients are connected, the longer the receiver takes to answer.
Therefore, since we deploy our collector on Kubernetes, we thought it could be interesting to link our HorizontalPodAutoscaler (HPA) to the number of connections, along the lines of the sketch below.
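Roughly what we have in mind (a sketch only: `otelcol_grpc_active_connections` is a hypothetical metric name, and this assumes a custom-metrics adapter such as prometheus-adapter serving the metric to the HPA):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: otel-collector
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: otel-collector
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Pods
      pods:
        metric:
          # Hypothetical name; nothing like this seems to be exposed
          # today, which is exactly what we are asking about.
          name: otelcol_grpc_active_connections
        target:
          type: AverageValue
          averageValue: "500"
```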
Is there a metric, or any other way, to get the OTLP receiver's gRPC connection count?
We tried increasing the metrics verbosity to "Detailed", but we didn't find this information.
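For reference, this is how we raised the verbosity (a minimal sketch of the collector's `service::telemetry` section):

```yaml
service:
  telemetry:
    metrics:
      # "detailed" adds extra instrumentation metrics (scraped from the
      # default telemetry port, 8888), but none of them appears to be
      # an active-connection gauge.
      level: detailed
```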