[Feature]: Allow precreation of AttributeSets for metrics #1387
Thanks for opening this! Still trying to fully understand the value this brings... I think what this allows us to do is avoid the cost of turning a slice of attributes into an `AttributeSet`, by reusing the `AttributeSet` across calls. Also, the bound instrument idea could be used if one knows keys+values ahead of time, so this would most likely not be required at all!
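For context, a "bound instrument" pre-binds a counter to a fixed set of attributes, so the per-call hot path does no attribute processing at all. Below is a minimal self-contained sketch of that idea using toy types; `Counter`, `BoundCounter`, and their methods here are illustrative inventions, not part of the current opentelemetry-rust API:

```rust
use std::collections::HashMap;
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::{Arc, Mutex};

// Toy counter keyed by a canonicalized attribute string (illustration only).
#[derive(Default)]
pub struct Counter {
    values: Mutex<HashMap<String, Arc<AtomicU64>>>,
}

impl Counter {
    // Resolve the attribute set once, up front, and hand back a cheap handle.
    pub fn bind(&self, attrs: &[(&str, &str)]) -> BoundCounter {
        let mut sorted: Vec<_> = attrs.to_vec();
        sorted.sort(); // canonical order, so key order at the call site doesn't matter
        let key = sorted
            .iter()
            .map(|(k, v)| format!("{k}={v}"))
            .collect::<Vec<_>>()
            .join(",");
        let slot = self
            .values
            .lock()
            .unwrap()
            .entry(key)
            .or_insert_with(|| Arc::new(AtomicU64::new(0)))
            .clone();
        BoundCounter { slot }
    }

    pub fn value(&self, attrs: &[(&str, &str)]) -> u64 {
        self.bind(attrs).slot.load(Ordering::Relaxed)
    }
}

// The hot path is a single atomic add: no hashing, no locking, no allocation.
pub struct BoundCounter {
    slot: Arc<AtomicU64>,
}

impl BoundCounter {
    pub fn add(&self, n: u64) {
        self.slot.fetch_add(n, Ordering::Relaxed);
    }
}

fn main() {
    let counter = Counter::default();
    // Bind once, then increment per packet.
    let bound = counter.bind(&[("tenant", "acme"), ("operation", "stuff-done")]);
    for _ in 0..1000 {
        bound.add(1);
    }
    println!("{}", counter.value(&[("operation", "stuff-done"), ("tenant", "acme")]));
}
```

The trade-off, as noted above, is that binding only helps when the keys and values are known ahead of time.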
So there are two parts to your response that I'll answer separately.

The first is: is this needed after #1379 is merged? I think so. Even with the improvements in #1379, from an API point of view the current API encourages extra allocations that aren't needed. For example:

```rust
pub fn do_stuff(name: String, tenant: String) {
    // stuff that needs doing
    counter.add(10, &[
        KeyValue::new("username", name),
        KeyValue::new("tenant", tenant),
        KeyValue::new("operation", "stuff-done".to_string()),
    ]);
}
```

How many times is each of these strings cloned? (This specific example even allocates a fresh `String` for the final attribute, whose value is a static literal.) This doesn't necessarily need to be fixed by passing in an `AttributeSet`, but an API that allows the counter to take an `AttributeSet` puts the caller in control of those allocations.

Is this needed if bounded instruments are implemented? Maybe? There is definitely overlap between the two. Bounded instruments have the potential to take things further by more efficiently decentralizing the aggregation of values (to reduce locking and hashmap lookups, for example). I'd argue it's still possibly useful to have both. This specific change is less complex to implement than bounded instruments (although it is a breaking change), while also providing an API that lets consumers be in charge of allocations for short-lived attributes.
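To make the cloning point concrete, here is a self-contained toy (not the real opentelemetry types) that counts clones: an API that only sees a borrowed slice must clone every attribute to retain it, while an API that takes ownership can simply move them. The function names `add_by_ref` and `add_by_value` are hypothetical stand-ins for the two API shapes:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Global clone counter for the toy KeyValue type (illustration only).
static CLONES: AtomicUsize = AtomicUsize::new(0);

#[derive(Debug)]
pub struct KeyValue {
    pub key: String,
    pub value: String,
}

impl Clone for KeyValue {
    fn clone(&self) -> Self {
        CLONES.fetch_add(1, Ordering::Relaxed);
        KeyValue { key: self.key.clone(), value: self.value.clone() }
    }
}

// Today's shape: the instrument only borrows the slice, so it must
// clone every KeyValue to keep the attributes beyond the call.
fn add_by_ref(attrs: &[KeyValue]) -> Vec<KeyValue> {
    attrs.to_vec() // one clone per KeyValue
}

// Proposed shape: the caller hands over ownership; nothing is cloned.
fn add_by_value(attrs: Vec<KeyValue>) -> Vec<KeyValue> {
    attrs // moved, not cloned
}

fn clones_so_far() -> usize {
    CLONES.load(Ordering::Relaxed)
}

fn main() {
    let attrs = vec![
        KeyValue { key: "username".into(), value: "alice".into() },
        KeyValue { key: "tenant".into(), value: "acme".into() },
    ];
    let _stored = add_by_ref(&attrs);
    println!("clones after by-ref call: {}", clones_so_far());
    let _stored = add_by_value(attrs);
    println!("clones after by-value call: {}", clones_so_far());
}
```

In a per-packet hot path, those per-call clones are exactly the allocations the proposal is trying to let callers avoid.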
Related Problems?
In our application we manage counters that get incremented on a per-network-packet basis. OpenTelemetry metrics has a high performance cost for this; even with PR #1379, `AttributeSet::from()` ends up consuming 6.8% of our application's CPU time in one of our load tests. In most of these cases, the attributes' keys and values are stable and thus cacheable.

Describe the solution you'd like:

Allowing the application calling opentelemetry's APIs to cache `AttributeSet`s, and incur the cost only once per set of attributes, can go a long way toward improving performance.

At a quick glance, we can either add a second `Counter::add()` method that takes an `AttributeSet`, or make a backward-incompatible change where `Counter::add()` no longer takes a `&[KeyValue]` and instead takes an `AttributeSet`.

My opinion is that the latter is the better long-term approach, as it encourages consumers to think about the cost of creating `AttributeSet`s. Even if they are not in a scenario where they can cache the attributes (the attributes aren't stable), this still gives the benefit of requiring less cloning (today cloning is required since a slice is passed in by reference, so we can't consume the `KeyValue`s to form the `AttributeSet`).

Note that this does require pulling `AttributeSet` out of the `opentelemetry_sdk` crate and moving it into a higher-level crate.

Considered Alternatives
No response
Additional Context
No response
Note that I'm willing to do this work.