Computing the hash of an `AttributeSet` is expensive, because every key and value in the attribute set has to be hashed. This hash is used by the `ValueMap` to look up whether a time series is already being aggregated for that set of attributes. Since the hashmap lookup occurs inside a mutex lock, no other counters can execute their `add()` calls while the hash is being calculated, which causes contention in high-throughput scenarios.
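To illustrate where the hashing cost lands, here is a minimal sketch of the pattern described above. The type and field names are hypothetical and much simpler than the real `ValueMap`; the point is only that the key is hashed while the lock is held:

```rust
use std::collections::HashMap;
use std::sync::Mutex;

// Hypothetical, simplified shape of the aggregation map described above.
struct ValueMapSketch {
    // Keyed by the full attribute set; every lookup hashes all keys and values.
    values: Mutex<HashMap<Vec<(String, String)>, u64>>,
}

impl ValueMapSketch {
    fn add(&self, attributes: Vec<(String, String)>, value: u64) {
        // The guard is taken first, so the `entry` call below hashes the
        // whole attribute vector while every other `add()` is blocked.
        let mut values = self.values.lock().unwrap();
        *values.entry(attributes).or_insert(0) += value;
    }
}
```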
This PR calculates and caches the hash at creation time. This improves throughput because the hash is computed by the thread creating the `AttributeSet`, outside of any mutex lock, meaning hashes can be computed in parallel and the time spent inside the mutex is reduced. As larger attribute sets are used for time series, the benefit of the reduced lock time should grow.
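A minimal sketch of the cached-hash approach follows. The struct and its fields are illustrative, not the crate's actual definitions; it only shows the technique of hashing once at construction and reusing that value in `Hash`:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hypothetical, simplified attribute set that caches its hash.
#[derive(Eq, PartialEq)]
struct CachedAttributeSet {
    attributes: Vec<(String, String)>, // sorted key/value pairs
    hash: u64,                         // precomputed once at creation
}

impl CachedAttributeSet {
    fn new(mut attributes: Vec<(String, String)>) -> Self {
        // Sort so that logically equal sets hash identically.
        attributes.sort();

        // Compute the hash once, on the caller's thread, outside any lock.
        let mut hasher = DefaultHasher::new();
        attributes.hash(&mut hasher);
        let hash = hasher.finish();

        Self { attributes, hash }
    }
}

impl Hash for CachedAttributeSet {
    fn hash<H: Hasher>(&self, state: &mut H) {
        // Feed only the cached value, so hashing during the locked
        // hashmap lookup is a single u64 write instead of walking
        // every key and value again.
        state.write_u64(self.hash);
    }
}
```

With this shape, the work done while holding the `ValueMap` lock shrinks to writing a single `u64` plus the equality check on collision, which is what reduces the time spent inside the mutex.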
The stress test results of this change for different thread counts are:
| Thread Count | Main | PR |
| -------------- | ---------- | --------- |
| 2 | 3,376,040 | 3,310,920 |
| 3 | 5,908,640 | 5,807,240 |
| 4 | 3,382,040 | 8,094,960 |
| 5 | 1,212,640 | 9,086,520 |
| 6 | 1,225,280 | 6,595,600 |
The non-precomputed hashes start seeing contention at 4 threads, and throughput drops substantially after that, while the precomputed hashes don't start seeing contention until 6 threads, and even then still deliver 5-6x more throughput thanks to the reduced time spent holding the lock.
While these benchmarks may not be "realistic" (most applications will do more work between counter updates), they do show better parallelism and an opportunity to reduce lock contention, at a cost of only 8 bytes per time series (a total of 16 KB of additional memory at maximum cardinality).