Description
Related Problems?
In our application we manage counters that get incremented on a per-network-packet basis. OpenTelemetry metrics carries a high performance cost for this: even with PR #1379, `AttributeSet::from()` ends up consuming 6.8% of our application's CPU time in one of our load tests. In most of these cases the attributes' keys and values are stable, and therefore cacheable.
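To make the cost concrete, here is a minimal sketch of the hot path as it looks today. All types here are illustrative stand-ins, not the real opentelemetry API; the `conversions` counter stands in for the per-call work that `AttributeSet::from(&[KeyValue])` does in the real SDK:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Stand-in for the real `opentelemetry::KeyValue`.
#[derive(Clone)]
pub struct KeyValue {
    pub key: &'static str,
    pub value: &'static str,
}

pub struct Counter {
    // Counts slice -> attribute-set conversions, mimicking the work
    // `AttributeSet::from` performs on every `add` call in the SDK.
    conversions: AtomicU64,
}

impl Counter {
    pub fn new() -> Self {
        Counter { conversions: AtomicU64::new(0) }
    }

    // Mirrors today's `Counter::add(&self, value, &[KeyValue])` shape:
    // the borrowed slice must be cloned and normalized on every call.
    pub fn add(&self, _value: u64, attributes: &[KeyValue]) {
        let _owned: Vec<KeyValue> = attributes.to_vec(); // clone per call
        self.conversions.fetch_add(1, Ordering::Relaxed);
    }

    pub fn conversions(&self) -> u64 {
        self.conversions.load(Ordering::Relaxed)
    }
}

pub fn count_packets(counter: &Counter, packets: u64) {
    let attrs = [KeyValue { key: "interface", value: "eth0" }];
    for _ in 0..packets {
        counter.add(1, &attrs); // same stable attributes, converted every time
    }
}

fn main() {
    let counter = Counter::new();
    count_packets(&counter, 1000);
    println!("conversions = {}", counter.conversions());
}
```

The attributes never change across the loop, yet the conversion cost is paid once per packet.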
Describe the solution you'd like:
Allowing the application calling OpenTelemetry's APIs to cache `AttributeSet`s and incur the cost only once per set of attributes can go a long way toward improving performance.
At a quick glance, we can either add a second `Counter::add()` method that takes an `AttributeSet`, or make a backward-incompatible change where `Counter::add()` no longer takes a `&[KeyValue]` and instead takes an `AttributeSet`.
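The two candidate shapes could look roughly like the following. This is a sketch only: the type definitions, method names (`add_with_set`), and the `From` impl are assumptions for illustration, not the real crate's API:

```rust
#[derive(Clone, Debug)]
pub struct KeyValue {
    pub key: String,
    pub value: String,
}

// Stand-in for the SDK's `AttributeSet`: an owned, normalized collection.
#[derive(Clone, Debug)]
pub struct AttributeSet(pub Vec<KeyValue>);

impl From<&[KeyValue]> for AttributeSet {
    fn from(attrs: &[KeyValue]) -> Self {
        let mut owned = attrs.to_vec();
        owned.sort_by(|a, b| a.key.cmp(&b.key)); // normalization cost paid here
        AttributeSet(owned)
    }
}

pub struct Counter;

impl Counter {
    // Option 1: keep the slice-based method and add a second method
    // (name hypothetical) that accepts a prebuilt set.
    pub fn add(&self, value: u64, attributes: &[KeyValue]) {
        self.add_with_set(value, &AttributeSet::from(attributes));
    }

    pub fn add_with_set(&self, _value: u64, _attributes: &AttributeSet) {
        // record the measurement; no conversion needed here
    }
}

// Option 2 (breaking change): `add` itself consumes an `AttributeSet`,
// so callers can move attributes in without an extra clone.
pub struct CounterV2;

impl CounterV2 {
    pub fn add(&self, _value: u64, _attributes: AttributeSet) {}
}

fn main() {
    let attrs = vec![
        KeyValue { key: "proto".into(), value: "udp".into() },
        KeyValue { key: "interface".into(), value: "eth0".into() },
    ];
    let set = AttributeSet::from(&attrs[..]);
    Counter.add_with_set(1, &set); // option 1
    CounterV2.add(1, set);         // option 2
}
```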
My opinion is that the latter is the better long-term approach, as it encourages consumers to think about the cost of creating `AttributeSet`s. Even consumers who cannot cache the attributes (because they are not stable) still benefit from less cloning: today cloning is required because the slice is passed in by reference, so we cannot consume the `KeyValue`s to form the `AttributeSet`.
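Assuming the breaking `Counter::add(value, AttributeSet)` shape, a caller with stable attributes could pay the conversion once outside the hot loop. Again a sketch with stand-in types, not the real API:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Counts how many times a slice is normalized into a set.
static CONVERSIONS: AtomicU64 = AtomicU64::new(0);

#[derive(Clone)]
pub struct KeyValue {
    pub key: &'static str,
    pub value: &'static str,
}

#[derive(Clone)]
pub struct AttributeSet(Vec<KeyValue>);

impl From<&[KeyValue]> for AttributeSet {
    fn from(attrs: &[KeyValue]) -> Self {
        CONVERSIONS.fetch_add(1, Ordering::Relaxed); // the expensive step
        AttributeSet(attrs.to_vec())
    }
}

pub struct Counter;

impl Counter {
    // Proposed shape: the caller moves an owned `AttributeSet` in.
    pub fn add(&self, _value: u64, _attributes: AttributeSet) {}
}

// Returns how many conversions happened inside the hot loop itself.
pub fn count_packets_cached(counter: &Counter, packets: u64) -> u64 {
    let attrs = [KeyValue { key: "interface", value: "eth0" }];
    let cached = AttributeSet::from(&attrs[..]); // paid once, up front
    let before = CONVERSIONS.load(Ordering::Relaxed);
    for _ in 0..packets {
        counter.add(1, cached.clone()); // cheap clone, no re-normalization
    }
    CONVERSIONS.load(Ordering::Relaxed) - before
}

fn main() {
    let loop_conversions = count_packets_cached(&Counter, 1000);
    println!("conversions inside loop = {}", loop_conversions);
}
```

The per-packet cost drops to a clone of an already-normalized set, and callers who can keep the set around avoid even that by reusing it.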
Note that this does require pulling `AttributeSet` out of the `opentelemetry_sdk` crate and moving it into a higher-level crate.
Considered Alternatives
No response
Additional Context
No response
Note that I'm willing to do this work.