Commit 4c038b9

WIP

1 parent 07263ab commit 4c038b9

File tree

9 files changed (+855 −844 lines)


Diff for: docs/AddressSpace.md

+3 −5

```diff
@@ -8,8 +8,6 @@ We give here some notes on the internal orchestration.
 Consider a first, "small" allocation (typically less than a platform page); such allocations showcase more of the machinery.
 For simplicity, we assume that
 
-TODO CoreAllocator rewrite here:
-
 - this is not an `OPEN_ENCLAVE` build,
 - the `BackendAllocator` has not been told to use a `fixed_range`,
 - this is not a `SNMALLOC_CHECK_CLIENT` build, and
@@ -18,10 +16,10 @@ TODO CoreAllocator rewrite here:
 Since this is the first allocation, all the internal caches will be empty, and so we will hit all the slow paths.
 For simplicity, we gloss over much of the "lazy initialization" that would actually be implied by a first allocation.
 
-1. The `LocalAlloc::small_alloc` finds that it cannot satisfy the request because its `LocalCache` lacks a free list for this size class.
-   The request is delegated, unchanged, to `CoreAllocator::small_alloc`.
+1. The `Allocator::small_alloc` finds that it cannot satisfy the request because it lacks a fast free list for this size class.
+   The request is delegated, unchanged, to `Allocator::small_refill`.
 
-2. The `CoreAllocator` has no active slab for this sizeclass, so `CoreAllocator::small_alloc_slow` delegates to `BackendAllocator::alloc_chunk`.
+2. The `Allocator` has no active slab for this sizeclass, so `Allocator::small_refill_slow` delegates to `BackendAllocator::alloc_chunk`.
    At this point, the allocation request is enlarged to one or a few chunks (a small counting number multiple of `MIN_CHUNK_SIZE`, which is typically 16KiB); see `sizeclass_to_slab_size`.
 
 3. `BackendAllocator::alloc_chunk` at this point splits the allocation request in two, allocating both the chunk's metadata structure (of size `PAGEMAP_METADATA_STRUCT_SIZE`) and the chunk itself (a multiple of `MIN_CHUNK_SIZE`).
```
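The enlargement in step 2 can be sketched numerically. This is a minimal sketch, assuming `MIN_CHUNK_SIZE` is the typical 16 KiB quoted above; `round_to_chunks` is an invented illustration, not the real `sizeclass_to_slab_size`, which consults precomputed sizeclass tables:

```cpp
#include <cstddef>

// Assumption: 16 KiB, the "typical" MIN_CHUNK_SIZE named in the doc text;
// the real value is configuration-dependent.
constexpr std::size_t MIN_CHUNK_SIZE = 16 * 1024;

// Illustrative only: grow a request to a whole (small counting) number of
// chunks, as the text describes for the slab-refill path.
constexpr std::size_t round_to_chunks(std::size_t bytes)
{
  std::size_t chunks = (bytes + MIN_CHUNK_SIZE - 1) / MIN_CHUNK_SIZE;
  return chunks * MIN_CHUNK_SIZE;
}
```

So a 48-byte first allocation would be refilled with at least one whole 16 KiB chunk, from which a slab of 48-byte objects is carved.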

Diff for: src/snmalloc/backend_helpers/commonconfig.h

+8 −31

```diff
@@ -30,50 +30,27 @@ namespace snmalloc
   {
     /**
      * Should allocators have inline message queues? If this is true then
-     * the `CoreAllocator` is responsible for allocating the
+     * the `Allocator` is responsible for allocating the
      * `RemoteAllocator` that contains its message queue. If this is false
      * then the `RemoteAllocator` must be separately allocated and provided
-     * to the `CoreAllocator` before it is used.
-     *
-     * Setting this to `false` currently requires also setting
-     * `LocalAllocSupportsLazyInit` to false so that the `CoreAllocator` can
-     * be provided to the `LocalAllocator` fully initialised but in the
-     * future it may be possible to allocate the `RemoteAllocator` via
-     * `alloc_meta_data` or a similar API in the back end.
+     * to the `Allocator` before it is used.
      */
     bool IsQueueInline = true;
 
     /**
-     * Does the `CoreAllocator` own a `Backend::LocalState` object? If this is
-     * true then the `CoreAllocator` is responsible for allocating and
+     * Does the `Allocator` own a `Backend::LocalState` object? If this is
+     * true then the `Allocator` is responsible for allocating and
      * deallocating a local state object, otherwise the surrounding code is
      * responsible for creating it.
-     *
-     * Use cases that set this to false will probably also need to set
-     * `LocalAllocSupportsLazyInit` to false so that they can provide the local
-     * state explicitly during allocator creation.
      */
-    bool CoreAllocOwnsLocalState = true;
+    bool AllocOwnsLocalState = true;
 
     /**
-     * Are `CoreAllocator` allocated by the pool allocator? If not then the
+     * Are `Allocator` allocated by the pool allocator? If not then the
      * code embedding this snmalloc configuration is responsible for allocating
-     * `CoreAllocator` instances.
-     *
-     * Users setting this flag must also set `LocalAllocSupportsLazyInit` to
-     * false currently because there is no alternative mechanism for allocating
-     * core allocators. This may change in future versions.
+     * `Allocator` instances.
      */
-    bool CoreAllocIsPoolAllocated = true;
-
-    /**
-     * Do `LocalAllocator` instances in this configuration support lazy
-     * initialisation? If so, then the first exit from a fast path will
-     * trigger allocation of a `CoreAllocator` and associated state. If not
-     * then the code embedding this configuration of snmalloc is responsible
-     * for allocating core allocators.
-     */
-    bool LocalAllocSupportsLazyInit = true;
+    bool AllocIsPoolAllocated = true;
 
     /**
      * Are the front and back pointers to the message queue in a RemoteAllocator
```
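The trimmed option set above can be mirrored in a small sketch. `Flags` and `EmbeddedConfig` here are invented stand-ins, not snmalloc's real types; they only show how an embedding that supplies its own `RemoteAllocator`, local state, and `Allocator` instances would flip the three remaining defaults:

```cpp
// Hypothetical stand-in for the option struct after this commit; the real
// one lives in src/snmalloc/backend_helpers/commonconfig.h.
struct Flags
{
  bool IsQueueInline = true;
  bool AllocOwnsLocalState = true;
  bool AllocIsPoolAllocated = true;
};

// An embedding that manages queue, local state, and allocator lifetimes
// itself opts out of all three defaults.
struct EmbeddedConfig
{
  static constexpr Flags Options{false, false, false};
};

static_assert(!EmbeddedConfig::Options.IsQueueInline);
static_assert(!EmbeddedConfig::Options.AllocOwnsLocalState);
static_assert(!EmbeddedConfig::Options.AllocIsPoolAllocated);
```

Note that the deleted comments tied each of these flags to `LocalAllocSupportsLazyInit`; with that flag gone, the three options can be set independently.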

Diff for: src/snmalloc/global/globalalloc.h

+3 −3

```diff
@@ -9,7 +9,7 @@ namespace snmalloc
   inline static void cleanup_unused()
   {
     static_assert(
-      Config_::Options.CoreAllocIsPoolAllocated,
+      Config_::Options.AllocIsPoolAllocated,
       "Global cleanup is available only for pool-allocated configurations");
     // Call this periodically to free and coalesce memory allocated by
     // allocators that are not currently in use by any thread.
@@ -41,7 +41,7 @@ namespace snmalloc
   inline static void debug_check_empty(bool* result = nullptr)
   {
     static_assert(
-      Config_::Options.CoreAllocIsPoolAllocated,
+      Config_::Options.AllocIsPoolAllocated,
       "Global status is available only for pool-allocated configurations");
     // This is a debugging function. It checks that all memory from all
     // allocators has been freed.
@@ -106,7 +106,7 @@ namespace snmalloc
   inline static void debug_in_use(size_t count)
   {
     static_assert(
-      Config_::Options.CoreAllocIsPoolAllocated,
+      Config_::Options.AllocIsPoolAllocated,
       "Global status is available only for pool-allocated configurations");
     auto alloc = AllocPool<Config_>::iterate();
     while (alloc != nullptr)
```
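These `static_assert`s gate the global maintenance entry points at compile time. A minimal sketch of the pattern (`PoolConfig`, `Opts`, and `cleanup_unused_sketch` are invented names for illustration, not snmalloc APIs):

```cpp
// Illustrative option struct and configuration; not snmalloc's real types.
struct Opts
{
  bool AllocIsPoolAllocated;
};

struct PoolConfig
{
  static constexpr Opts Options{true};
};

// Mirrors the guard in globalalloc.h: instantiating this with a
// non-pool-allocated configuration fails to compile, so the global walk
// over pooled allocators can never run against externally managed ones.
template<typename Config>
int cleanup_unused_sketch()
{
  static_assert(
    Config::Options.AllocIsPoolAllocated,
    "Global cleanup is available only for pool-allocated configurations");
  return 0; // the real function would iterate the allocator pool here
}
```

Because the check is a `static_assert` rather than a runtime branch, misconfigured embeddings are rejected before the code can ever run.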

Diff for: src/snmalloc/global/threadalloc.h

+21 −34

```diff
@@ -114,46 +114,33 @@ namespace snmalloc
     {
       bool post_teardown = teardown_called;
 
-      if constexpr (!Config::Options.LocalAllocSupportsLazyInit)
-      {
-        SNMALLOC_CHECK(
-          false &&
-          "lazy_init called on an allocator that doesn't support lazy "
-          "initialisation");
-        // Unreachable, but needed to keep the type checker happy in deducing
-        // the return type of this function.
-        return static_cast<decltype(action(args...))>(nullptr);
-      }
-      else
-      {
-        alloc = AllocPool<Config>::acquire();
+      alloc = AllocPool<Config>::acquire();
 
-        // register_clean_up must be called after init. register clean up
-        // may be implemented with allocation, so need to ensure we have a
-        // valid allocator at this point.
-        if (!post_teardown)
-        {
-          // Must be called at least once per thread.
-          // A pthread implementation only calls the thread destruction handle
-          // if the key has been set.
-          Subclass::register_clean_up();
+      // register_clean_up must be called after init. register clean up
+      // may be implemented with allocation, so need to ensure we have a
+      // valid allocator at this point.
+      if (!post_teardown)
+      {
+        // Must be called at least once per thread.
+        // A pthread implementation only calls the thread destruction handle
+        // if the key has been set.
+        Subclass::register_clean_up();
 
-          // Perform underlying operation
-          return r(alloc, args...);
-        }
+        // Perform underlying operation
+        return r(alloc, args...);
+      }
 
-        OnDestruct od([]() {
+      OnDestruct od([]() {
 # ifdef SNMALLOC_TRACING
-          message<1024>("post_teardown flush()");
+        message<1024>("post_teardown flush()");
 # endif
-        // We didn't have an allocator because the thread is being torndown.
-        // We need to return any local state, so we don't leak it.
-        ThreadAlloc::teardown();
-      });
+        // We didn't have an allocator because the thread is being torndown.
+        // We need to return any local state, so we don't leak it.
+        ThreadAlloc::teardown();
+      });
 
-        // Perform underlying operation
-        return r(alloc, args...);
-      }
+      // Perform underlying operation
+      return r(alloc, args...);
     }
 
   public:
```
Diff for: src/snmalloc/mem/backend_concept.h

+2 −2

```diff
@@ -164,14 +164,14 @@ namespace snmalloc
     } &&
     (
       requires() {
-        Config::Options.CoreAllocIsPoolAllocated == true;
+        Config::Options.AllocIsPoolAllocated == true;
         typename Config::GlobalPoolState;
         {
           Config::pool()
         } -> ConceptSame<typename Config::GlobalPoolState&>;
       } ||
       requires() {
-        Config::Options.CoreAllocIsPoolAllocated == false;
+        Config::Options.AllocIsPoolAllocated == false;
       });
 
   /**
```
