Add SDK span telemetry metrics #1631
Conversation
Related #1580
Co-authored-by: Joao Grassi <[email protected]>
Would a …
@lzchen Not in this PR, but I don't see why we wouldn't add something like this in the future; to me, this would fall into a follow-up. This PR is currently about tracking data loss (plus the bonus of tracking the effective sampling rate).
@lzchen Would HTTP and gRPC instrumentation be good enough to solve this use case? Or do you think explicit additional metrics in the exporters are needed?
For our use case in particular, tracking those things (request count, size and duration) is exactly what we need. Speaking separately though, would "duration" be a useful metric for exporters in general, even for those that don't wind up using network requests?
I believe certain implementations (like Python) have made it so that instrumentations do not track calls made by the SDK (and thus the exporter) itself. I think explicit metrics related to SDK components are needed in that regard.
That makes sense for tracing (where it is easy to produce an infinite export loop) but, IMO, makes less sense for metrics, where that kind of feedback loop doesn't exist.
That's a good point. At least today, unfortunately, all our instrumentations behave that way. Hypothetically, if we were to change this behavior, the instrumentations wouldn't be able to differentiate between calls made from the SDK and ones made from the user's application, correct?
Yeah... People would need to use the …
A few points on duration:
So I think duration is the first and most important choice.
So IIUC you are referring to a part of what I would call "pipeline latency": the total time a span takes from being ended to being successfully exported. The metric you are envisioning would be the portion of this latency spent in the exporter, ignoring e.g. the batching span processor delay.
My main concern here would be storage overhead. Histograms are much more expensive than counters; we're speaking of at least 10x with bad buckets, even more if you use exponential histograms or proper fine-granular buckets. This would make it hard to justify having the health metrics enabled by default, even though having them enabled by default gives the best out-of-the-box experience for users. At the same time, I can't really see the general importance / usefulness of having the exporter durations: it feels more like a nice-to-have. What conclusions does this metric allow you to draw? Do you have concrete examples?
All other comments and discussions above are resolved. WDYT? @lmolkova @JonasKunz @dashpole @lzchen
Pipeline latency is cool, but the moment you have it, you need to also have a way to break it down into pieces (exporting part, processor queue).
Debugging connectivity with my backend: network issues, throttling, slow backend response, retries, retry backoff interval optimizations. Counts are good, but they won't tell you that your P99 is 10 sec after all tries because your backoff interval is wrong; you'd just see fewer of them and have no idea. I don't think it's just a nice-to-have. As a cost mitigation strategy, we can always use a small number of buckets by default, and users can always reconfigure them if they need fewer/more.
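For illustration, here is a minimal sketch (Java, using the OpenTelemetry SDK view API) of how a user could override the default bucket boundaries for such a duration histogram. The metric name `otel.sdk.exporter.span.export.duration` is only a placeholder for whatever name the conventions would define, and the bucket boundaries are arbitrary example values.

```java
import io.opentelemetry.sdk.metrics.Aggregation;
import io.opentelemetry.sdk.metrics.InstrumentSelector;
import io.opentelemetry.sdk.metrics.SdkMeterProvider;
import io.opentelemetry.sdk.metrics.View;
import java.util.Arrays;

public class DurationBucketsExample {
  public static void main(String[] args) {
    // Register a view that narrows the (hypothetical) export-duration histogram
    // down to a handful of explicit buckets (in seconds) to keep storage cost low.
    SdkMeterProvider meterProvider =
        SdkMeterProvider.builder()
            .registerView(
                InstrumentSelector.builder()
                    .setName("otel.sdk.exporter.span.export.duration") // placeholder name
                    .build(),
                View.builder()
                    .setAggregation(
                        Aggregation.explicitBucketHistogram(
                            Arrays.asList(0.25, 1.0, 5.0, 10.0)))
                    .build())
            .build();
    // meterProvider would then be wired into the SDK via
    // OpenTelemetrySdk.builder().setMeterProvider(meterProvider).
  }
}
```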
Issue for follow-up discussions around adding duration: |
Changes
With this PR I'd like to start a discussion around adding SDK self-monitoring metrics to the semantic conventions.
The goal of these metrics is to give insights into how the SDK is performing, e.g. whether data is being dropped due to overload / misconfiguration or everything is healthy.
I'd like to add these to semconv to keep them language agnostic, so that for example a single dashboard can be used to visualize the health state of all SDKs used in a system.
We checked the SDK implementations; it seems like only the Java SDK currently has some health metrics implemented.
This PR took some inspiration from those and is intended to improve and therefore supersede them.
I'd like to start out with just span-related metrics to keep the PR and discussions simpler here, but would follow up with similar PRs for logs and metrics based on the outcome of the discussions on this PR.
Prior work
This PR can be seen as a follow-up to the closed OTEP 259.
So we have kind of gone full circle: the discussion started with just SDK metrics (only for exporters), moved to an approach unifying the metrics across SDK exporters and the collector, and then ended up with just collector metrics.
So this PR can be seen as the required revival of #184 (see also this comment).
In my opinion, it is a good thing to separate the collector and SDK self-metrics:
Existing Metrics in Java SDK
For reference, here is what the existing health metrics currently look like in the Java SDK:
Batch Span Processor metrics
- `queueSize`, value is the current size of the queue
  - attribute `spanProcessorType` = `BatchSpanProcessor` (there was a former `ExecutorServiceSpanProcessor` which has been removed); the attribute distinguishes the data when multiple `BatchSpanProcessor` instances are used
- `processedSpans`, value is the number of spans submitted to the processor
  - attribute `spanProcessorType` = `BatchSpanProcessor`
  - attribute `dropped` (`boolean`), `true` for the number of spans which could not be processed due to a full queue

The SDK also implements pretty much the same metrics for the `BatchLogRecordProcessor`, just with `span` replaced everywhere with `log`.
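To make the shape of these processor metrics concrete, here is a minimal, hypothetical sketch (not the Java SDK's actual implementation) of how a `queueSize`-style gauge with a `spanProcessorType` attribute could be registered through the OpenTelemetry Java metrics API; the scope name and the queue stand-in are assumptions.

```java
import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.common.AttributeKey;
import io.opentelemetry.api.common.Attributes;
import io.opentelemetry.api.metrics.Meter;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class QueueSizeMetricSketch {
  public static void main(String[] args) {
    // Stand-in for the batch processor's internal span queue.
    BlockingQueue<Object> queue = new ArrayBlockingQueue<>(2048);

    Meter meter = GlobalOpenTelemetry.getMeter("io.opentelemetry.sdk.trace"); // assumed scope name
    Attributes attrs =
        Attributes.of(AttributeKey.stringKey("spanProcessorType"), "BatchSpanProcessor");

    // Asynchronous gauge: the callback reports the current queue size on each collection.
    meter
        .gaugeBuilder("queueSize")
        .ofLongs()
        .buildWithCallback(measurement -> measurement.record(queue.size(), attrs));
  }
}
```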
Exporter metrics
Exporter metrics are the same for spans, metrics and logs. They are distinguishable based on a `type` attribute.

Also, the metric names depend on a "name" and a "transport" defined by the exporter. For OTLP those are:

- `exporterName` = `otlp`
- `transport` is one of `grpc`, `http` (= protobuf) or `http-json`

The transport is used just for the instrumentation scope name: `io.opentelemetry.exporters.<exporterName>-<transport>`

Based on that, the following metrics are exposed:

- Counter `<exporterName>.exporter.seen`: the number of records (spans, metrics or logs) submitted to the exporter
  - attribute `type`: one of `span`, `metric` or `log`
- Counter `<exporterName>.exporter.exported`: the number of records (spans, metrics or logs) actually exported (or failed)
  - attribute `type`: one of `span`, `metric` or `log`
  - attribute `success` (`boolean`): `false` for exporter failures
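As a rough illustration (a sketch under the naming scheme described above, not the Java SDK's actual code), counters like these could be produced as follows for the OTLP gRPC exporter; the class and helper methods are hypothetical.

```java
import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.common.Attributes;
import io.opentelemetry.api.metrics.LongCounter;
import io.opentelemetry.api.metrics.Meter;

public class ExporterMetricsSketch {
  // Scope name built from the exporter name ("otlp") and transport ("grpc").
  private static final Meter METER =
      GlobalOpenTelemetry.getMeter("io.opentelemetry.exporters.otlp-grpc");

  private static final LongCounter SEEN =
      METER.counterBuilder("otlp.exporter.seen").build();
  private static final LongCounter EXPORTED =
      METER.counterBuilder("otlp.exporter.exported").build();

  // Count span records handed to the exporter.
  static void recordSeen(int count) {
    SEEN.add(count, Attributes.builder().put("type", "span").build());
  }

  // Count span records whose export attempt finished, tagged with the outcome.
  static void recordExported(int count, boolean success) {
    EXPORTED.add(
        count,
        Attributes.builder().put("type", "span").put("success", success).build());
  }
}
```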
Merge requirement checklist