Commit 4e734e6

Revert "Free all slabs on region reset"
This reverts commit 67d7ab4. The goal of the reverted commit was to fix
flaky failures of tarantool tests that check the amount of memory used
by a fiber:

 | fiber.info()[fiber.self().id()].memory.used

It also attempted to overcome the situation when a fiber holds some
amount of memory which is not used in any way. The upper limit of such
memory is controlled by a threshold in tarantool's fiber_gc() function
(128 KiB at the moment):

 | void
 | fiber_gc(void)
 | {
 | 	if (region_used(&fiber()->gc) < 128 * 1024) {
 | 		region_reset(&fiber()->gc);
 | 		return;
 | 	}
 |
 | 	region_free(&fiber()->gc);
 | }

The reverted commit, however, leads to significant performance
degradation on certain workloads (see #4736). So the reversion fixes
the performance degradation and reopens the problem with the tests,
which is tracked in #4750.

Related to #12
Related to tarantool/tarantool#4750
Fixes tarantool/tarantool#4736
1 parent: 50cb787

1 file changed: +4 −14 lines

small/region.h (+4 −14)
@@ -156,16 +156,6 @@ region_reserve(struct region *region, size_t size)
 					       slab.next_in_list);
 		if (size <= rslab_unused(slab))
 			return (char *) rslab_data(slab) + slab->used;
-		/* Try to get a slab from the region cache. */
-		slab = rlist_last_entry(&region->slabs.slabs,
-					struct rslab,
-					slab.next_in_list);
-		if (slab->used == 0 && size <= rslab_unused(slab)) {
-			/* Move this slab to the head. */
-			slab_list_del(&region->slabs, &slab->slab, next_in_list);
-			slab_list_add(&region->slabs, &slab->slab, next_in_list);
-			return (char *) rslab_data(slab);
-		}
 	}
 	return region_reserve_slow(region, size);
 }
@@ -222,14 +212,14 @@ region_aligned_alloc(struct region *region, size_t size, size_t alignment)

 /**
  * Mark region as empty, but keep the blocks.
- * Do not change the first slab and use previous slabs as a cache to
- * use for future allocations.
  */
 static inline void
 region_reset(struct region *region)
 {
-	struct rslab *slab;
-	rlist_foreach_entry(slab, &region->slabs.slabs, slab.next_in_list) {
+	if (! rlist_empty(&region->slabs.slabs)) {
+		struct rslab *slab = rlist_first_entry(&region->slabs.slabs,
+						       struct rslab,
+						       slab.next_in_list);
 		region->slabs.stats.used -= slab->used;
 		slab->used = 0;
 	}
