
Conversation

Aathish101

No description provided.

geky-bot (Collaborator) commented Oct 9, 2025

Tests passed ✓, Code: 17128 B (+0.0%), Stack: 1448 B (+0.0%), Structs: 812 B (+0.0%)
| Config | Code | Stack | Structs |
|:--|--:|--:|--:|
| Default | 17128 B (+0.0%) | 1448 B (+0.0%) | 812 B (+0.0%) |
| Readonly | 6234 B (+0.0%) | 448 B (+0.0%) | 812 B (+0.0%) |
| Threadsafe | 17980 B (+0.0%) | 1448 B (+0.0%) | 820 B (+0.0%) |
| Multiversion | 17200 B (+0.0%) | 1448 B (+0.0%) | 816 B (+0.0%) |
| Migrate | 18792 B (+0.0%) | 1752 B (+0.0%) | 816 B (+0.0%) |
| Error-asserts | 17952 B (+0.0%) | 1440 B (+0.0%) | 812 B (+0.0%) |

| Coverage | | Benchmarks | |
|:--|--:|:--|--:|
| Lines | 2438/2599 lines (-0.0%) | Readed | 29000746676 B (+0.0%) |
| Branches | 1288/1624 branches (-0.0%) | Proged | 1482895246 B (+0.0%) |
| | | Erased | 1568921600 B (+0.0%) |

BenBE left a comment


This doesn't look right. Most of the changes are deletions that don't even align with where they would make sense. Also, removing the diagrams makes the document lose its soul, as most of the explanations were properly accompanied by diagrams that made things much easier to digest. Also: why are you removing the blank lines between paragraphs? What's even the goal of this PR?


```
| | | .---._____
.---._____
```

Why this change?

```
@@ -1,460 +1,60 @@
## The design of littlefs
```


Why remove all those blank lines?

```
'----------------'----------------' '----------------'----------------'

3. If our block is full of entries _and_ we can't find any garbage, then what?
```

Is removing large chunks across the middle of differing sections intentional?


```
3. If our block is full of entries _and_ we can't find any garbage, then what?
At this point, most logging filesystems would return an error indicating no
@@ -443,89 +443,94 @@
```

Left-over diff marker.

```
entry, ![d] dynamic entries (entries that are outdated during garbage
collection), and ![s] static entries (entries that need to be copied during
Looking at the problem generically, consider a log with `n` bytes for each
entry, `d`dynamic entries (entries that are outdated during garbage
```

whitespace

Suggested change:

```diff
-entry, `d`dynamic entries (entries that are outdated during garbage
+entry, `d` dynamic entries (entries that are outdated during garbage
```

Comment on lines -772 to -2123
us store CTZ skip-lists with only a pointer and size.

CTZ skip-lists give us a COW data structure that is easily traversable in
_O(n)_, can be appended in _O(1)_, and can be read in _O(n log n)_. All of
these operations work in a bounded amount of RAM and require only two words of
storage overhead per block. In combination with metadata pairs, CTZ skip-lists
provide power resilience and compact storage of data.

```
.--------.
.|metadata|
|| |
|| |
|'--------'
'----|---'
v
.--------. .--------. .--------. .--------.
| data 0 |<-| data 1 |<-| data 2 |<-| data 3 |
| |<-| |--| | | |
| | | | | | | |
'--------' '--------' '--------' '--------'
write data to disk, create copies
=>
.--------.
.|metadata|
|| |
|| |
|'--------'
'----|---'
v
.--------. .--------. .--------. .--------.
| data 0 |<-| data 1 |<-| data 2 |<-| data 3 |
| |<-| |--| | | |
| | | | | | | |
'--------' '--------' '--------' '--------'
^ ^ ^
| | | .--------. .--------. .--------. .--------.
| | '----| new |<-| new |<-| new |<-| new |
| '----------------| data 2 |<-| data 3 |--| data 4 | | data 5 |
'------------------| |--| |--| | | |
'--------' '--------' '--------' '--------'
commit to metadata pair
=>
.--------.
.|new |
||metadata|
|| |
|'--------'
'----|---'
|
.--------. .--------. .--------. .--------. |
| data 0 |<-| data 1 |<-| data 2 |<-| data 3 | |
| |<-| |--| | | | |
| | | | | | | | |
'--------' '--------' '--------' '--------' |
^ ^ ^ v
| | | .--------. .--------. .--------. .--------.
| | '----| new |<-| new |<-| new |<-| new |
| '----------------| data 2 |<-| data 3 |--| data 4 | | data 5 |
'------------------| |--| |--| | | |
'--------' '--------' '--------' '--------'
```
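To make the arithmetic concrete, here is a minimal sketch in C (illustrative, not littlefs's actual code), assuming the convention from the CTZ section: block 0 holds no pointers, and every other block n holds ctz(n)+1 pointers, with pointer i reaching back 2^i blocks.

```c
#include <stdint.h>
#include <stdio.h>

// Number of pointers stored at the start of block n in a CTZ skip-list.
// Block 0 is the first block of the file and holds no pointers; every other
// block n holds ctz(n)+1 pointers, where pointer i points back 2^i blocks.
// __builtin_ctz is the GCC/Clang count-trailing-zeros builtin.
static uint32_t ctz_pointer_count(uint32_t n) {
    return (n == 0) ? 0 : (uint32_t)__builtin_ctz(n) + 1;
}

// Walk backwards from block `from` to block `to` (to <= from), always taking
// the largest pointer that doesn't overshoot. Only the hop count is tracked
// here; it stays logarithmic in the distance travelled.
static uint32_t ctz_seek_hops(uint32_t from, uint32_t to) {
    uint32_t hops = 0;
    while (from > to) {
        uint32_t skip = __builtin_ctz(from);         // largest pointer available
        while (skip > 0 && from - (1u << skip) < to) {
            skip--;                                  // too far, use a smaller pointer
        }
        from -= 1u << skip;
        hops++;
    }
    return hops;
}

int main(void) {
    for (uint32_t n = 0; n <= 8; n++) {
        printf("block %u holds %u pointer(s)\n",
               (unsigned)n, (unsigned)ctz_pointer_count(n));
    }
    printf("seeking from block 63 back to block 0 takes %u hops\n",
           (unsigned)ctz_seek_hops(63, 0));
    return 0;
}
```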

## The block allocator

So we now have the framework for an atomic, wear leveling filesystem. Small two
block metadata pairs provide atomic updates, while CTZ skip-lists provide
compact storage of data in COW blocks.

But now we need to look at the [elephant] in the room. Where do all these
blocks come from?

Deciding which block to use next is the responsibility of the block allocator.
In filesystem design, block allocation is often a second-class citizen, but in
a COW filesystem its role becomes much more important as it is needed for
nearly every write to the filesystem.

Normally, block allocation involves some sort of free list or bitmap stored on
the filesystem that is updated with free blocks. However, with power
resilience, keeping these structures consistent becomes difficult. It doesn't
help that any mistake in updating these structures can result in lost blocks
that are impossible to recover.

littlefs takes a cautious approach. Instead of trusting a free list on disk,
littlefs relies on the fact that the filesystem on disk is a mirror image of
the free blocks on the disk. The block allocator operates much like a garbage
collector in a scripting language, scanning for unused blocks on demand.

```
.----.
|root|
| |
'----'
v-------' '-------v
.----. . . .----.
| A | . . | B |
| | . . | |
'----' . . '----'
. . . . v--' '------------v---------v
. . . .----. . .----. .----.
. . . | C | . | D | | E |
. . . | | . | | | |
. . . '----' . '----' '----'
. . . . . . . . . .
.----.----.----.----.----.----.----.----.----.----.----.----.
| A | |root| C | B | | D | | E | |
| | | | | | | | | | |
'----'----'----'----'----'----'----'----'----'----'----'----'
^ ^ ^ ^ ^
'-------------------'----'-------------------'----'-- free blocks
```

While this approach may sound complicated, the decision to not maintain a free
list greatly simplifies the overall design of littlefs. Unlike programming
languages, there are only a handful of data structures we need to traverse.
And block deallocation, which occurs nearly as often as block allocation,
is simply a noop. This "drop it on the floor" strategy greatly reduces the
complexity of managing on disk data structures, especially when handling
high-risk error conditions.

---

Our block allocator needs to find free blocks efficiently. You could traverse
through every block on storage and check each one against our filesystem tree;
however, the runtime would be abhorrent. We need to somehow collect multiple
blocks per traversal.

Looking at existing designs, some larger filesystems that use a similar "drop
it on the floor" strategy store a bitmap of the entire storage in [RAM]. This
works well because bitmaps are surprisingly compact. We can't use the same
strategy here, as it violates our constant RAM requirement, but we may be able
to modify the idea into a workable solution.

```
.----.----.----.----.----.----.----.----.----.----.----.----.
| A | |root| C | B | | D | | E | |
| | | | | | | | | | |
'----'----'----'----'----'----'----'----'----'----'----'----'
1 0 1 1 1 0 0 1 0 1 0 0
\---------------------------+----------------------------/
v
bitmap: 0xb94 (0b101110010100)
```
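As a tiny sketch of the bitmap idea (illustrative only): one bit per block is set while traversing the filesystem tree, and anything left clear is free. The block indices below mirror the diagram above.

```c
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define BLOCK_COUNT 12

// One bit per block: 1 = referenced by the filesystem tree, 0 = free.
// Twelve blocks fit in two bytes, which is why a full-storage bitmap is so
// attractive when RAM allows it.
static uint8_t bitmap[(BLOCK_COUNT + 7) / 8];

static void mark_in_use(uint32_t block) {
    bitmap[block / 8] |= 1u << (block % 8);
}

static int is_free(uint32_t block) {
    return !(bitmap[block / 8] & (1u << (block % 8)));
}

int main(void) {
    // Blocks referenced in the diagram above: A, root, C, B, D, E.
    const uint32_t used[] = {0, 2, 3, 4, 7, 9};

    memset(bitmap, 0, sizeof(bitmap));
    for (unsigned i = 0; i < sizeof(used)/sizeof(used[0]); i++) {
        mark_in_use(used[i]);
    }
    for (uint32_t b = 0; b < BLOCK_COUNT; b++) {
        printf("block %2u: %s\n", (unsigned)b, is_free(b) ? "free" : "in use");
    }
    return 0;
}
```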

The block allocator in littlefs is a compromise between a disk-sized bitmap and
a brute force traversal. Instead of a bitmap the size of storage, we keep track
of a small, fixed-size bitmap called the lookahead buffer. During block
allocation, we take blocks from the lookahead buffer. If the lookahead buffer
is empty, we scan the filesystem for more free blocks, populating our lookahead
buffer. In each scan we use an increasing offset, circling the storage as
blocks are allocated.

Here's what it might look like to allocate 4 blocks on a decently busy
filesystem with a 32 bit lookahead and a total of 128 blocks (512 KiB
of storage if blocks are 4 KiB):
```
boot... lookahead:
fs blocks: fffff9fffffffffeffffffffffff0000
scanning... lookahead: fffff9ff
fs blocks: fffff9fffffffffeffffffffffff0000
alloc = 21 lookahead: fffffdff
fs blocks: fffffdfffffffffeffffffffffff0000
alloc = 22 lookahead: ffffffff
fs blocks: fffffffffffffffeffffffffffff0000
scanning... lookahead: fffffffe
fs blocks: fffffffffffffffeffffffffffff0000
alloc = 63 lookahead: ffffffff
fs blocks: ffffffffffffffffffffffffffff0000
scanning... lookahead: ffffffff
fs blocks: ffffffffffffffffffffffffffff0000
scanning... lookahead: ffffffff
fs blocks: ffffffffffffffffffffffffffff0000
scanning... lookahead: ffff0000
fs blocks: ffffffffffffffffffffffffffff0000
alloc = 112 lookahead: ffff8000
fs blocks: ffffffffffffffffffffffffffff8000
```

This lookahead approach has a runtime complexity of _O(n&sup2;)_ to completely
scan storage; however, bitmaps are surprisingly compact, and in practice only
one or two passes are usually needed to find free blocks. Additionally, the
performance of the allocator can be optimized by adjusting the block size or
size of the lookahead buffer, trading either write granularity or RAM for
allocator performance.
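Below is a rough sketch of such a lookahead allocator in C. It is illustrative only (littlefs's real allocator differs in details), and the `fs_traverse` stub stands in for walking every metadata pair and CTZ skip-list in the filesystem.

```c
#include <stdint.h>
#include <string.h>
#include <stdbool.h>
#include <stdio.h>

#define BLOCK_COUNT    128
#define LOOKAHEAD_BITS 32

struct lookahead {
    uint32_t offset;                    // first block index covered by the window
    uint32_t next;                      // next bit to inspect within the window
    uint8_t  used[LOOKAHEAD_BITS / 8];  // 1 = in use, 0 = free (within the window)
};

// Stand-in for a filesystem traversal that reports every referenced block;
// a real implementation would walk the metadata pairs and CTZ skip-lists.
static const uint32_t fake_used_blocks[] = {0, 2, 3, 4, 7, 9, 21, 22, 63};

static void fs_traverse(void (*cb)(void *, uint32_t), void *ctx) {
    for (unsigned i = 0; i < sizeof(fake_used_blocks)/sizeof(fake_used_blocks[0]); i++) {
        cb(ctx, fake_used_blocks[i]);
    }
}

// Mark a referenced block in the window, if it falls inside the window.
static void mark_cb(void *ctx, uint32_t block) {
    struct lookahead *la = ctx;
    uint32_t rel = (block + BLOCK_COUNT - la->offset) % BLOCK_COUNT;
    if (rel < LOOKAHEAD_BITS) {
        la->used[rel / 8] |= 1u << (rel % 8);
    }
}

// Repopulate the window starting at the given block offset.
static void rescan(struct lookahead *la, uint32_t offset) {
    la->offset = offset;
    la->next = 0;
    memset(la->used, 0, sizeof(la->used));
    fs_traverse(mark_cb, la);
}

// Hand out one free block, sliding the window forward and rescanning the
// filesystem whenever the current window is exhausted.
static bool alloc_block(struct lookahead *la, uint32_t *block) {
    for (uint32_t tried = 0; tried < BLOCK_COUNT + LOOKAHEAD_BITS; tried++) {
        if (la->next >= LOOKAHEAD_BITS) {
            // window exhausted: slide it forward and repopulate from the tree
            rescan(la, (la->offset + LOOKAHEAD_BITS) % BLOCK_COUNT);
        }
        uint32_t rel = la->next++;
        if (!(la->used[rel / 8] & (1u << (rel % 8)))) {
            la->used[rel / 8] |= 1u << (rel % 8);       // reserve within the window
            *block = (la->offset + rel) % BLOCK_COUNT;  // committed data makes it
            return true;                                // reachable from the tree
        }
    }
    return false;  // a full circle found nothing free: storage is full
}

int main(void) {
    struct lookahead la;
    rescan(&la, 0);   // at mount littlefs starts the scan at a random offset; 0 here
    for (int i = 0; i < 4; i++) {
        uint32_t block;
        if (alloc_block(&la, &block)) {
            printf("alloc = %u\n", (unsigned)block);
        }
    }
    return 0;
}
```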

## Wear leveling

The block allocator has a secondary role: wear leveling.

Wear leveling is the process of distributing wear across all blocks in the
storage to prevent the filesystem from experiencing an early death due to
wear on a single block in the storage.

littlefs has two methods of protecting against wear:
1. Detection and recovery from bad blocks
2. Evenly distributing wear across dynamic blocks

---

Recovery from bad blocks doesn't actually have anything to do with the block
allocator itself. Instead, it relies on the ability of the filesystem to detect
and evict bad blocks when they occur.

In littlefs, it is fairly straightforward to detect bad blocks at write time.
All writes must be sourced by some form of data in RAM, so immediately after we
write to a block, we can read the data back and verify that it was written
correctly. If we find that the data on disk does not match the copy we have in
RAM, a write error has occurred and we most likely have a bad block.

Once we detect a bad block, we need to recover from it. In the case of write
errors, we have a copy of the corrupted data in RAM, so all we need to do is
evict the bad block, allocate a new, hopefully good block, and repeat the write
that previously failed.

The actual act of evicting the bad block and replacing it with a new block is
left up to the filesystem's copy-on-bounded-writes (CObW) data structures. One
property of CObW data structures is that any block can be replaced during a
COW operation. The bounded-writes part is normally triggered by a counter, but
nothing prevents us from triggering a COW operation as soon as we find a bad
block.

```
.----.
|root|
| |
'----'
v--' '----------------------v
.----. .----.
| A | | B |
| | | |
'----' '----'
. . v---' .
. . .----. .
. . | C | .
. . | | .
. . '----' .
. . . . .
.----.----.----.----.----.----.----.----.----.----.
| A |root| | C | B | |
| | | | | | |
'----'----'----'----'----'----'----'----'----'----'
update C
=>
.----.
|root|
| |
'----'
v--' '----------------------v
.----. .----.
| A | | B |
| | | |
'----' '----'
. . v---' .
. . .----. .
. . |bad | .
. . |blck| .
. . '----' .
. . . . .
.----.----.----.----.----.----.----.----.----.----.
| A |root| |bad | B | |
| | | |blck| | |
'----'----'----'----'----'----'----'----'----'----'
oh no! bad block! relocate C
=>
.----.
|root|
| |
'----'
v--' '----------------------v
.----. .----.
| A | | B |
| | | |
'----' '----'
. . v---' .
. . .----. .
. . |bad | .
. . |blck| .
. . '----' .
. . . . .
.----.----.----.----.----.----.----.----.----.----.
| A |root| |bad | B |bad | |
| | | |blck| |blck| |
'----'----'----'----'----'----'----'----'----'----'
--------->
oh no! bad block! relocate C
=>
.----.
|root|
| |
'----'
v--' '----------------------v
.----. .----.
| A | | B |
| | | |
'----' '----'
. . v---' .
. . .----. . .----.
. . |bad | . | C' |
. . |blck| . | |
. . '----' . '----'
. . . . . . .
.----.----.----.----.----.----.----.----.----.----.
| A |root| |bad | B |bad | C' | |
| | | |blck| |blck| | |
'----'----'----'----'----'----'----'----'----'----'
-------------->
successfully relocated C, update B
=>
.----.
|root|
| |
'----'
v--' '----------------------v
.----. .----.
| A | |bad |
| | |blck|
'----' '----'
. . v---' .
. . .----. . .----.
. . |bad | . | C' |
. . |blck| . | |
. . '----' . '----'
. . . . . . .
.----.----.----.----.----.----.----.----.----.----.
| A |root| |bad |bad |bad | C' | |
| | | |blck|blck|blck| | |
'----'----'----'----'----'----'----'----'----'----'
oh no! bad block! relocate B
=>
.----.
|root|
| |
'----'
v--' '----------------------v
.----. .----. .----.
| A | |bad | |bad |
| | |blck| |blck|
'----' '----' '----'
. . v---' . . .
. . .----. . .----. .
. . |bad | . | C' | .
. . |blck| . | | .
. . '----' . '----' .
. . . . . . . .
.----.----.----.----.----.----.----.----.----.----.
| A |root| |bad |bad |bad | C' |bad |
| | | |blck|blck|blck| |blck|
'----'----'----'----'----'----'----'----'----'----'
-------------->
oh no! bad block! relocate B
=>
.----.
|root|
| |
'----'
v--' '----------------------v
.----. .----. .----.
| A | | B' | |bad |
| | | | |blck|
'----' '----' '----'
. . . | . .---' .
. . . '--------------v-------------v
. . . . .----. . .----.
. . . . |bad | . | C' |
. . . . |blck| . | |
. . . . '----' . '----'
. . . . . . . . .
.----.----.----.----.----.----.----.----.----.----.
| A |root| B' | |bad |bad |bad | C' |bad |
| | | | |blck|blck|blck| |blck|
'----'----'----'----'----'----'----'----'----'----'
------------> ------------------
successfully relocated B, update root
=>
.----.
|root|
| |
'----'
v--' '--v
.----. .----.
| A | | B' |
| | | |
'----' '----'
. . . '---------------------------v
. . . . .----.
. . . . | C' |
. . . . | |
. . . . '----'
. . . . . .
.----.----.----.----.----.----.----.----.----.----.
| A |root| B' | |bad |bad |bad | C' |bad |
| | | | |blck|blck|blck| |blck|
'----'----'----'----'----'----'----'----'----'----'
```

We may find that the new block is also bad, but hopefully after repeating this
cycle we'll eventually find a new block where a write succeeds. If we don't,
that means that all blocks in our storage are bad, and we've reached the end of
our device's usable life. At this point, littlefs will return an "out of space"
error. This is technically true, as there are no more good blocks, but as an
added benefit it also matches the error condition expected by users of
dynamically sized data.
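The loop itself is simple; here is an illustrative sketch in C, where the RAM-backed block device and the in-order allocator are toy stand-ins for the user-provided block device callbacks and the lookahead allocator described earlier.

```c
#include <stdint.h>
#include <string.h>
#include <stdbool.h>
#include <stdio.h>

#define BLOCK_SIZE  256
#define BLOCK_COUNT 8

// Toy RAM-backed block device; block 3 is pretend-worn and corrupts writes.
static uint8_t disk[BLOCK_COUNT][BLOCK_SIZE];
static const bool bad[BLOCK_COUNT] = {[3] = true};

static void bd_prog(uint32_t block, uint32_t off, const void *data, uint32_t size) {
    memcpy(&disk[block][off], data, size);
    if (bad[block]) {
        disk[block][off] ^= 0x01;   // a worn block silently flips a bit
    }
}

static void bd_read(uint32_t block, uint32_t off, void *data, uint32_t size) {
    memcpy(data, &disk[block][off], size);
}

// Toy allocator: hand out blocks in order (see the allocator sketch above).
static uint32_t next_free = 3;
static bool alloc_block(uint32_t *block) {
    if (next_free >= BLOCK_COUNT) {
        return false;
    }
    *block = next_free++;
    return true;
}

// Program data, read it back, and compare. On a mismatch, treat the block as
// bad: allocate a replacement and retry, until we run out of blocks entirely.
static bool prog_verified(uint32_t *block, uint32_t off,
                          const void *data, uint32_t size) {
    uint8_t check[BLOCK_SIZE];
    for (;;) {
        bd_prog(*block, off, data, size);
        bd_read(*block, off, check, size);
        if (memcmp(data, check, size) == 0) {
            return true;                      // data verified on disk
        }
        if (!alloc_block(block)) {
            return false;                     // no good blocks left: "out of space"
        }
    }
}

int main(void) {
    uint32_t block;
    alloc_block(&block);                      // first attempt lands on bad block 3
    if (prog_verified(&block, 0, "hello", 6)) {
        printf("data ended up in block %u\n", (unsigned)block);
    }
    return 0;
}
```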

---

Read errors, on the other hand, are quite a bit more complicated. We don't have
a copy of the data lingering around in RAM, so we need a way to reconstruct the
original data even after it has been corrupted. One such mechanism for this is
[error-correction-codes (ECC)][wikipedia-ecc].

ECC is an extension to the idea of a checksum. Where a checksum such as CRC can
detect that an error has occurred in the data, ECC can detect and actually
correct some amount of errors. However, there is a limit to how many errors ECC
can detect: the [Hamming bound][wikipedia-hamming-bound]. As the number of
errors approaches the Hamming bound, we may still be able to detect errors, but
can no longer fix the data. If we've reached this point the block is
unrecoverable.

littlefs by itself does **not** provide ECC. The block nature and relatively
large footprint of ECC does not work well with the dynamically sized data of
filesystems, correcting errors without RAM is complicated, and ECC fits better
with the geometry of block devices. In fact, several NOR flash chips have extra
storage intended for ECC, and many NAND chips can even calculate ECC on the
chip itself.

In littlefs, ECC is entirely optional. Read errors can instead be prevented
proactively by wear leveling. But it's important to note that ECC can be used
at the block device level to modestly extend the life of a device. littlefs
respects any errors reported by the block device, allowing a block device to
provide additional aggressive error detection.

---

To avoid read errors, we need to be proactive, as opposed to reactive as we
were with write errors.

One way to do this is to detect when the number of errors in a block exceeds
some threshold, but is still recoverable. With ECC we can do this at write
time, and treat the error as a write error, evicting the block before fatal
read errors have a chance to develop.

A different, more generic strategy, is to proactively distribute wear across
all blocks in the storage, with the hope that no single block fails before the
rest of storage is approaching the end of its usable life. This is called
wear leveling.

Generally, wear leveling algorithms fall into one of two categories:

1. [Dynamic wear leveling][wikipedia-dynamic-wear-leveling], where we
distribute wear over "dynamic" blocks. This can be accomplished by
only considering unused blocks.

2. [Static wear leveling][wikipedia-static-wear-leveling], where we
distribute wear over both "dynamic" and "static" blocks. To make this work,
we need to consider all blocks, including blocks that already contain data.

As a tradeoff for code size and complexity, littlefs (currently) only provides
dynamic wear leveling. This is a best effort solution. Wear is not distributed
perfectly, but it is distributed among the free blocks and greatly extends the
life of a device.

On top of this, littlefs uses a statistical wear leveling algorithm. What this
means is that we don’t actively track wear, instead we rely on a uniform
distribution of wear across storage to approximate a dynamic wear leveling
algorithm. Despite the long name, this is actually a simplification of dynamic
wear leveling.

The uniform distribution of wear is left up to the block allocator, which
creates a uniform distribution in two parts. The easy part is when the device
is powered, in which case we allocate the blocks linearly, circling the device.
The harder part is what to do when the device loses power. We can't just
restart the allocator at the beginning of storage, as this would bias the wear.
Instead, we start the allocator at a random offset every time we mount the
filesystem. As long as this random offset is uniform, the combined allocation
pattern is also a uniform distribution.

![Cumulative wear distribution graph][wear-distribution-graph]

Initially, this approach to wear leveling looks like it creates a difficult
dependency on a power-independent random number generator, which must return
different random numbers on each boot. However, the filesystem is in a
relatively unique situation in that it is sitting on top of a large amount
of entropy that persists across power loss.

We can actually use the data on disk to directly drive our random number
generator. In practice, this is implemented by xoring the checksums of each
metadata pair, which is already calculated to fetch and mount the filesystem.

```
.--------. \ probably random
.|metadata| | ^
|| | +-> crc ----------------------> xor
|| | | ^
|'--------' / |
'---|--|-' |
.-' '-------------------------. |
| | |
| .--------------> xor ------------> xor
| | ^ | ^
v crc crc v crc
.--------. \ ^ .--------. \ ^ .--------. \ ^
.|metadata|-|--|-->|metadata| | | .|metadata| | |
|| | +--' || | +--' || | +--'
|| | | || | | || | |
|'--------' / |'--------' / |'--------' /
'---|--|-' '----|---' '---|--|-'
.-' '-. | .-' '-.
v v v v v
.--------. .--------. .--------. .--------. .--------.
| data | | data | | data | | data | | data |
| | | | | | | | | |
| | | | | | | | | |
'--------' '--------' '--------' '--------' '--------'
```

Note that this random number generator is not perfect. It only returns unique
random numbers when the filesystem is modified. This is exactly what we want
for distributing wear in the allocator, but means this random number generator
is not useful for general use.
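Here is an illustrative sketch of this "free" random number generator; the CRC routine and the metadata contents are placeholders, not littlefs's actual on-disk format.

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>
#include <stdio.h>

// Bitwise CRC32 (reflected, polynomial 0xedb88320), kept tiny for the sketch.
static uint32_t crc32(uint32_t crc, const void *data, size_t size) {
    const uint8_t *p = data;
    crc = ~crc;
    for (size_t i = 0; i < size; i++) {
        crc ^= p[i];
        for (int b = 0; b < 8; b++) {
            crc = (crc >> 1) ^ ((crc & 1) ? 0xedb88320 : 0);
        }
    }
    return ~crc;
}

int main(void) {
    // Pretend these are the contents of the metadata blocks read at mount.
    const char *metadata[] = {"root, rev 12", "dir A, rev 7", "dir B, rev 3"};

    // Each block's CRC is already computed to validate it during mount, so
    // xoring the CRCs together costs nothing extra.
    uint32_t seed = 0;
    for (size_t i = 0; i < sizeof(metadata)/sizeof(metadata[0]); i++) {
        seed ^= crc32(0, metadata[i], strlen(metadata[i]));
    }

    // Use the result to pick where the allocator starts scanning this boot.
    uint32_t block_count = 128;
    printf("allocator starts at block %u\n", (unsigned)(seed % block_count));
    return 0;
}
```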

---

Together, bad block detection and dynamic wear leveling provide a best effort
solution for avoiding the early death of a filesystem due to wear. Importantly,
littlefs's wear leveling algorithm provides a key feature: You can increase the
life of a device simply by increasing the size of storage. And if more
aggressive wear leveling is desired, you can always combine littlefs with a
[flash translation layer (FTL)][wikipedia-ftl] to get a small power resilient
filesystem with static wear leveling.

## Files

Now that we have our building blocks out of the way, we can start looking at
our filesystem as a whole.

The first step: How do we actually store our files?

We've determined that CTZ skip-lists are pretty good at storing data compactly,
so following the precedent found in other filesystems we could give each file
a skip-list stored in a metadata pair that acts as an inode for the file.


```
.--------.
.|metadata|
|| |
|| |
|'--------'
'----|---'
v
.--------. .--------. .--------. .--------.
| data 0 |<-| data 1 |<-| data 2 |<-| data 3 |
| |<-| |--| | | |
| | | | | | | |
'--------' '--------' '--------' '--------'
```

However, this doesn't work well when files are small, which is common for
embedded systems. Compared to PCs, _all_ data in an embedded system is small.

Consider a small 4-byte file. With a two block metadata-pair and one block for
the CTZ skip-list, we find ourselves using a full 3 blocks. On most NOR flash
with 4 KiB blocks, this is 12 KiB of overhead. A ridiculous 3072x increase.

```
file stored as inode, 4 bytes costs ~12 KiB
.----------------. \
.| revision | |
||----------------| \ |
|| skiplist ---. +- metadata |
||----------------| | / 4x8 bytes |
|| checksum | | 32 bytes |
||----------------| | |
|| | | | +- metadata pair
|| v | | | 2x4 KiB
|| | | | 8 KiB
|| | | |
|| | | |
|| | | |
|'----------------' | |
'----------------' | /
.--------'
v
.----------------. \ \
| data | +- data |
|----------------| / 4 bytes |
| | |
| | |
| | |
| | +- data block
| | | 4 KiB
| | |
| | |
| | |
| | |
| | |
'----------------' /
```

We can make several improvements. First, instead of giving each file its own
metadata pair, we can store multiple files in a single metadata pair. One way
to do this is to directly associate a directory with a metadata pair (or a
linked list of metadata pairs). This makes it easy for multiple files to share
the directory's metadata pair for logging and reduces the collective storage
overhead.

The strict binding of metadata pairs and directories also gives users
direct control over storage utilization depending on how they organize their
directories.

```
multiple files stored in metadata pair, 4 bytes costs ~4 KiB
.----------------.
.| revision |
||----------------|
|| A name |
|| A skiplist -----.
||----------------| | \
|| B name | | +- metadata
|| B skiplist ---. | | 4x8 bytes
||----------------| | | / 32 bytes
|| checksum | | |
||----------------| | |
|| | | | |
|| v | | |
|'----------------' | |
'----------------' | |
.----------------' |
v v
.----------------. .----------------. \ \
| A data | | B data | +- data |
| | |----------------| / 4 bytes |
| | | | |
| | | | |
| | | | |
| | | | + data block
| | | | | 4 KiB
| | | | |
|----------------| | | |
| | | | |
| | | | |
| | | | |
'----------------' '----------------' /
```

The second improvement we can make is noticing that for very small files, our
attempts to use CTZ skip-lists for compact storage backfires. Metadata pairs
have a ~4x storage cost, so if our file is smaller than 1/4 the block size,
there's actually no benefit in storing our file outside of our metadata pair.

In this case, we can store the file directly in our directory's metadata pair.
We call this an inline file, and it allows a directory to store many small
files quite efficiently. Our previous 4 byte file now only takes up a
theoretical 16 bytes on disk.

```
inline files stored in metadata pair, 4 bytes costs ~16 bytes
.----------------.
.| revision |
||----------------|
|| A name |
|| A skiplist ---.
||----------------| | \
|| B name | | +- data
|| B data | | | 4x4 bytes
||----------------| | / 16 bytes
|| checksum | |
||----------------| |
|| | | |
|| v | |
|'----------------' |
'----------------' |
.---------'
v
.----------------.
| A data |
| |
| |
| |
| |
| |
| |
| |
|----------------|
| |
| |
| |
'----------------'
```

Once the file exceeds 1/4 the block size, we switch to a CTZ skip-list. This
means that our files never use more than 4x storage overhead, decreasing as
the file grows in size.

![File storage cost graph][file-cost-graph]
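The decision itself is tiny. Here is a sketch of the rule (names and the fixed block size are illustrative, not littlefs's API):

```c
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define BLOCK_SIZE 4096u

// Files at or below 1/4 of the block size live inline in the directory's
// metadata pair; anything larger gets its own CTZ skip-list.
static bool should_inline(uint32_t file_size) {
    return file_size <= BLOCK_SIZE / 4;
}

int main(void) {
    const uint32_t sizes[] = {4, 512, 1024, 1025, 8192};
    for (unsigned i = 0; i < sizeof(sizes)/sizeof(sizes[0]); i++) {
        printf("%5u bytes -> %s\n", (unsigned)sizes[i],
               should_inline(sizes[i]) ? "inline in metadata pair"
                                       : "CTZ skip-list");
    }
    return 0;
}
```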

## Directories

Now we just need directories to store our files. As mentioned above we want
a strict binding of directories and metadata pairs, but there are a few
complications we need to sort out.

On their own, each directory is a linked-list of metadata pairs. This lets us
store an unlimited number of files in each directory, and we don't need to
worry about the runtime complexity of unbounded logs. We can store other
directory pointers in our metadata pairs, which gives us a directory tree, much
like what you find on other filesystems.

```
.--------.
.| root |
|| |
|| |
|'--------'
'---|--|-'
.-' '-------------------------.
v v
.--------. .--------. .--------.
.| dir A |------->| dir A | .| dir B |
|| | || | || |
|| | || | || |
|'--------' |'--------' |'--------'
'---|--|-' '----|---' '---|--|-'
.-' '-. | .-' '-.
v v v v v
.--------. .--------. .--------. .--------. .--------.
| file C | | file D | | file E | | file F | | file G |
| | | | | | | | | |
| | | | | | | | | |
'--------' '--------' '--------' '--------' '--------'
```

The main complication is, once again, traversal with a constant amount of
[RAM]. The directory tree is a tree, and the unfortunate fact is you can't
traverse a tree with constant RAM.

Fortunately, the elements of our tree are metadata pairs, so unlike CTZ
skip-lists, we're not limited to strict COW operations. One thing we can do is
thread a linked-list through our tree, explicitly enabling cheap traversal
over the entire filesystem.

```
.--------.
.| root |-.
|| | |
.-------|| |-'
| |'--------'
| '---|--|-'
| .-' '-------------------------.
| v v
| .--------. .--------. .--------.
'->| dir A |------->| dir A |------->| dir B |
|| | || | || |
|| | || | || |
|'--------' |'--------' |'--------'
'---|--|-' '----|---' '---|--|-'
.-' '-. | .-' '-.
v v v v v
.--------. .--------. .--------. .--------. .--------.
| file C | | file D | | file E | | file F | | file G |
| | | | | | | | | |
| | | | | | | | | |
'--------' '--------' '--------' '--------' '--------'
```
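To see why the thread helps, here is an illustrative sketch (not littlefs's metadata format): with a tail pointer in every metadata pair, visiting the whole filesystem is a flat loop that needs constant RAM, no recursion, and no parent stack.

```c
#include <stdio.h>

struct metadata_pair {
    const char *name;
    const struct metadata_pair *tail;   // next pair in the threaded list, or NULL
};

static void traverse_all(const struct metadata_pair *root,
                         void (*visit)(const struct metadata_pair *)) {
    for (const struct metadata_pair *p = root; p; p = p->tail) {
        visit(p);                       // O(1) state regardless of tree depth
    }
}

static void print_pair(const struct metadata_pair *p) {
    printf("visiting %s\n", p->name);
}

int main(void) {
    // Mirror of the diagram above: root -> dir A -> dir A (cont.) -> dir B.
    struct metadata_pair dirb  = {"dir B",         NULL};
    struct metadata_pair dira2 = {"dir A (cont.)", &dirb};
    struct metadata_pair dira1 = {"dir A",         &dira2};
    struct metadata_pair root  = {"root",          &dira1};
    traverse_all(&root, print_pair);
    return 0;
}
```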

Unfortunately, not sticking to pure COW operations creates some problems. Now,
whenever we want to manipulate the directory tree, multiple pointers need to be
updated. If you're familiar with designing atomic data structures this should
set off a bunch of red flags.

To work around this, our threaded linked-list has a bit of leeway. Instead of
only containing metadata pairs found in our filesystem, it is allowed to
contain metadata pairs that have no parent because of a power loss. These are
called orphaned metadata pairs.

With the possibility of orphans, we can build power loss resilient operations
that maintain a filesystem tree threaded with a linked-list for traversal.

Adding a directory to our tree:

```
.--------.
.| root |-.
|| | |
.-------|| |-'
| |'--------'
| '---|--|-'
| .-' '-.
| v v
| .--------. .--------.
'->| dir A |->| dir C |
|| | || |
|| | || |
|'--------' |'--------'
'--------' '--------'
allocate dir B
=>
.--------.
.| root |-.
|| | |
.-------|| |-'
| |'--------'
| '---|--|-'
| .-' '-.
| v v
| .--------. .--------.
'->| dir A |--->| dir C |
|| | .->| |
|| | | || |
|'--------' | |'--------'
'--------' | '--------'
|
.--------. |
.| dir B |-'
|| |
|| |
|'--------'
'--------'
insert dir B into threaded linked-list, creating an orphan
=>
.--------.
.| root |-.
|| | |
.-------|| |-'
| |'--------'
| '---|--|-'
| .-' '-------------.
| v v
| .--------. .--------. .--------.
'->| dir A |->| dir B |->| dir C |
|| | || orphan!| || |
|| | || | || |
|'--------' |'--------' |'--------'
'--------' '--------' '--------'
add dir B to parent directory
=>
.--------.
.| root |-.
|| | |
.-------------|| |-'
| |'--------'
| '--|-|-|-'
| .------' | '-------.
| v v v
| .--------. .--------. .--------.
'->| dir A |->| dir B |->| dir C |
|| | || | || |
|| | || | || |
|'--------' |'--------' |'--------'
'--------' '--------' '--------'
```

Removing a directory:

```
.--------.
.| root |-.
|| | |
.-------------|| |-'
| |'--------'
| '--|-|-|-'
| .------' | '-------.
| v v v
| .--------. .--------. .--------.
'->| dir A |->| dir B |->| dir C |
|| | || | || |
|| | || | || |
|'--------' |'--------' |'--------'
'--------' '--------' '--------'
remove dir B from parent directory, creating an orphan
=>
.--------.
.| root |-.
|| | |
.-------|| |-'
| |'--------'
| '---|--|-'
| .-' '-------------.
| v v
| .--------. .--------. .--------.
'->| dir A |->| dir B |->| dir C |
|| | || orphan!| || |
|| | || | || |
|'--------' |'--------' |'--------'
'--------' '--------' '--------'
remove dir B from threaded linked-list, returning dir B to free blocks
=>
.--------.
.| root |-.
|| | |
.-------|| |-'
| |'--------'
| '---|--|-'
| .-' '-.
| v v
| .--------. .--------.
'->| dir A |->| dir C |
|| | || |
|| | || |
|'--------' |'--------'
'--------' '--------'
```

In addition to normal directory tree operations, we can use orphans to evict
blocks in a metadata pair when the block goes bad or exceeds its allocated
erases. If we lose power while evicting a metadata block we may end up with
a situation where the filesystem references the replacement block while the
threaded linked-list still contains the evicted block. We call this a
half-orphan.

```
.--------.
.| root |-.
|| | |
.-------------|| |-'
| |'--------'
| '--|-|-|-'
| .------' | '-------.
| v v v
| .--------. .--------. .--------.
'->| dir A |->| dir B |->| dir C |
|| | || | || |
|| | || | || |
|'--------' |'--------' |'--------'
'--------' '--------' '--------'
try to write to dir B
=>
.--------.
.| root |-.
|| | |
.----------------|| |-'
| |'--------'
| '-|-||-|-'
| .--------' || '-----.
| v |v v
| .--------. .--------. .--------.
'->| dir A |---->| dir B |->| dir C |
|| |-. | | || |
|| | | | | || |
|'--------' | '--------' |'--------'
'--------' | v '--------'
| .--------.
'->| dir B |
| bad |
| block! |
'--------'
oh no! bad block detected, allocate replacement
=>
.--------.
.| root |-.
|| | |
.----------------|| |-'
| |'--------'
| '-|-||-|-'
| .--------' || '-------.
| v |v v
| .--------. .--------. .--------.
'->| dir A |---->| dir B |--->| dir C |
|| |-. | | .->| |
|| | | | | | || |
|'--------' | '--------' | |'--------'
'--------' | v | '--------'
| .--------. |
'->| dir B | |
| bad | |
| block! | |
'--------' |
|
.--------. |
| dir B |--'
| |
| |
'--------'
insert replacement in threaded linked-list, creating a half-orphan
=>
.--------.
.| root |-.
|| | |
.----------------|| |-'
| |'--------'
| '-|-||-|-'
| .--------' || '-------.
| v |v v
| .--------. .--------. .--------.
'->| dir A |---->| dir B |--->| dir C |
|| |-. | | .->| |
|| | | | | | || |
|'--------' | '--------' | |'--------'
'--------' | v | '--------'
| .--------. |
| | dir B | |
| | bad | |
| | block! | |
| '--------' |
| |
| .--------. |
'->| dir B |--'
| half |
| orphan!|
'--------'
fix reference in parent directory
=>
.--------.
.| root |-.
|| | |
.-------------|| |-'
| |'--------'
| '--|-|-|-'
| .------' | '-------.
| v v v
| .--------. .--------. .--------.
'->| dir A |->| dir B |->| dir C |
|| | || | || |
|| | || | || |
|'--------' |'--------' |'--------'
'--------' '--------' '--------'
```

Finding orphans and half-orphans is expensive, requiring a _O(n&sup2;)_
comparison of every metadata pair with every directory entry. But the tradeoff
is a power resilient filesystem that works with only a bounded amount of RAM.
Fortunately, we only need to check for orphans on the first allocation after
boot, and a read-only littlefs can ignore the threaded linked-list entirely.

If we only had some sort of global state, then we could also store a flag and
avoid searching for orphans unless we knew we were specifically interrupted
while manipulating the directory tree (foreshadowing!).
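Here is an illustrative sketch of that scan (toy structures, not littlefs's metadata format): every pair on the thread is checked against every directory entry, and a pair that nobody references is an orphan left over from a power loss.

```c
#include <stdbool.h>
#include <stdio.h>

#define MAX_CHILDREN 4

struct pair {
    const char *name;
    struct pair *tail;                       // threaded linked-list
    struct pair *children[MAX_CHILDREN];     // directory entries (tree pointers)
};

// O(n^2): for one candidate, walk every pair and every one of its entries.
static bool has_parent(struct pair *head, struct pair *candidate) {
    for (struct pair *p = head; p; p = p->tail) {
        for (int i = 0; i < MAX_CHILDREN; i++) {
            if (p->children[i] == candidate) {
                return true;
            }
        }
    }
    return false;
}

static void deorphan(struct pair *root) {
    // root is its own anchor; everything after it must be referenced somewhere.
    for (struct pair *prev = root; prev->tail; ) {
        if (!has_parent(root, prev->tail)) {
            printf("dropping orphan %s\n", prev->tail->name);
            prev->tail = prev->tail->tail;   // unlink; its blocks become free
        } else {
            prev = prev->tail;
        }
    }
}

int main(void) {
    struct pair dirc = {"dir C", NULL,  {0}};
    struct pair dirb = {"dir B", &dirc, {0}};        // nobody references dir B
    struct pair dira = {"dir A", &dirb, {0}};
    struct pair root = {"root",  &dira, {&dira, &dirc}};
    deorphan(&root);
    return 0;
}
```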

## The move problem

We have one last challenge: the move problem. Phrasing the problem is simple:

How do you atomically move a file between two directories?

In littlefs we can atomically commit to directories, but we can't create
an atomic commit that spans multiple directories. The filesystem must go
through a minimum of two distinct states to complete a move.

To make matters worse, file moves are a common form of synchronization for
filesystems. As a filesystem designed for power-loss, it's important we get
atomic moves right.

So what can we do?

- We definitely can't just let power-loss result in duplicated or lost files.
This could easily break users' code and would only reveal itself in extreme
cases. We were only able to be lazy about the threaded linked-list because
it isn't user facing and we can handle the corner cases internally.

- Some filesystems propagate COW operations up the tree until a common parent
is found. Unfortunately this interacts poorly with our threaded tree and
brings back the issue of upward propagation of wear.

- In a previous version of littlefs we tried to solve this problem by going
back and forth between the source and destination, marking and unmarking the
file as moving in order to make the move atomic from the user perspective.
This worked, but not well. Finding failed moves was expensive and required
a unique identifier for each file.

In the end, solving the move problem required creating a new mechanism for
sharing knowledge between multiple metadata pairs. In littlefs this led to the
introduction of a mechanism called "global state".

---

Global state is a small set of state that can be updated from _any_ metadata
pair. Combining global state with metadata pairs' ability to update multiple
entries in one commit gives us a powerful tool for crafting complex atomic
operations.

How does global state work?

Global state exists as a set of deltas that are distributed across the metadata
pairs in the filesystem. The actual global state can be built out of these
deltas by xoring together all of the deltas in the filesystem.

```
.--------. .--------. .--------. .--------. .--------.
.| |->| gdelta |->| |->| gdelta |->| gdelta |
|| | || 0x23 | || | || 0xff | || 0xce |
|| | || | || | || | || |
|'--------' |'--------' |'--------' |'--------' |'--------'
'--------' '----|---' '--------' '----|---' '----|---'
v v v
0x00 --> xor ------------------> xor ------> xor --> gstate 0x12
```

To update the global state from a metadata pair, we take the global state we
know and xor it with both our changes and any existing delta in the metadata
pair. Committing this new delta to the metadata pair commits the changes to
the filesystem's global state.

```
.--------. .--------. .--------. .--------. .--------.
.| |->| gdelta |->| |->| gdelta |->| gdelta |
|| | || 0x23 | || | || 0xff | || 0xce |
|| | || | || | || | || |
|'--------' |'--------' |'--------' |'--------' |'--------'
'--------' '----|---' '--------' '--|---|-' '----|---'
v v | v
0x00 --> xor ----------------> xor -|------> xor --> gstate = 0x12
| |
| |
change gstate to 0xab --> xor <------------|--------------------------'
=> | v
'------------> xor
|
v
.--------. .--------. .--------. .--------. .--------.
.| |->| gdelta |->| |->| gdelta |->| gdelta |
|| | || 0x23 | || | || 0x46 | || 0xce |
|| | || | || | || | || |
|'--------' |'--------' |'--------' |'--------' |'--------'
'--------' '----|---' '--------' '----|---' '----|---'
v v v
0x00 --> xor ------------------> xor ------> xor --> gstate = 0xab
```

To make this efficient, we always keep a copy of the global state in RAM. We
only need to iterate over our metadata pairs and build the global state when
the filesystem is mounted.
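Here is an illustrative sketch of both operations, reusing the single-byte deltas from the diagrams above (real littlefs global state is a small struct rather than one byte).

```c
#include <stdint.h>
#include <stdio.h>

#define PAIR_COUNT 5

// Deltas from the diagram above: 0x00 ^ 0x23 ^ 0x00 ^ 0xff ^ 0xce = 0x12.
static uint8_t gdelta[PAIR_COUNT] = {0x00, 0x23, 0x00, 0xff, 0xce};

// At mount: fold every delta together to recover the current global state.
static uint8_t build_gstate(void) {
    uint8_t gstate = 0x00;
    for (int i = 0; i < PAIR_COUNT; i++) {
        gstate ^= gdelta[i];
    }
    return gstate;
}

// To move the global state to new_gstate by committing to pair i, xor the
// desired change into that pair's existing delta. One commit, one pair.
static void commit_gstate(int i, uint8_t old_gstate, uint8_t new_gstate) {
    gdelta[i] ^= old_gstate ^ new_gstate;
}

int main(void) {
    uint8_t gstate = build_gstate();
    printf("gstate after mount: 0x%02x\n", gstate);         // 0x12

    commit_gstate(3, gstate, 0xab);                          // change gstate to 0xab
    printf("gstate rebuilt:     0x%02x\n", build_gstate());  // 0xab
    return 0;
}
```

Note how the fourth pair's delta becomes 0x46 after the commit, matching the second diagram above.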

You may have noticed that global state is very expensive. We keep a copy in
RAM and a delta in an unbounded number of metadata pairs. Even if we reset
the global state to its initial value, we can't easily clean up the deltas on
disk. For this reason, it's very important that we keep the size of global
state bounded and extremely small. But, even with a strict budget, global
state is incredibly valuable.

---

Now we can solve the move problem. We can create global state describing our
move atomically with the creation of the new file, and we can clear this move
state atomically with the removal of the old file.

```
.--------. gstate = no move
.| root |-.
|| | |
.-------------|| |-'
| |'--------'
| '--|-|-|-'
| .------' | '-------.
| v v v
| .--------. .--------. .--------.
'->| dir A |->| dir B |->| dir C |
|| | || | || |
|| | || | || |
|'--------' |'--------' |'--------'
'----|---' '--------' '--------'
v
.--------.
| file D |
| |
| |
'--------'
begin move, add reference in dir C, change gstate to have move
=>
.--------. gstate = moving file D in dir A (m1)
.| root |-.
|| | |
.-------------|| |-'
| |'--------'
| '--|-|-|-'
| .------' | '-------.
| v v v
| .--------. .--------. .--------.
'->| dir A |->| dir B |->| dir C |
|| | || | || gdelta |
|| | || | || =m1 |
|'--------' |'--------' |'--------'
'----|---' '--------' '----|---'
| .----------------'
v v
.--------.
| file D |
| |
| |
'--------'
complete move, remove reference in dir A, change gstate to no move
=>
.--------. gstate = no move (m1^~m1)
.| root |-.
|| | |
.-------------|| |-'
| |'--------'
| '--|-|-|-'
| .------' | '-------.
| v v v
| .--------. .--------. .--------.
'->| dir A |->| dir B |->| dir C |
|| gdelta | || | || gdelta |
|| =~m1 | || | || =m1 |
|'--------' |'--------' |'--------'
'--------' '--------' '----|---'
v
.--------.
| file D |
| |
| |
'--------'
```


If, after building our global state during mount, we find information
describing an ongoing move, we know we lost power during a move and the file
is duplicated in both the source and destination directories. If this happens,
we can resolve the move using the information in the global state to remove
one of the files.

```
.--------. gstate = moving file D in dir A (m1)
.| root |-. ^
|| |------------> xor
.---------------|| |-' ^
| |'--------' |
| '--|-|-|-' |
| .--------' | '---------. |
| | | | |
| | .----------> xor --------> xor
| v | v ^ v ^
| .--------. | .--------. | .--------. |
'->| dir A |-|->| dir B |-|->| dir C | |
|| |-' || |-' || gdelta |-'
|| | || | || =m1 |
|'--------' |'--------' |'--------'
'----|---' '--------' '----|---'
| .---------------------'
v v
.--------.
| file D |
| |
| |
'--------'
```
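An illustrative sketch of that mount-time fixup (the fields here are placeholders; littlefs packs this information into its actual global state encoding):

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct gstate {
    bool     has_move;
    uint32_t move_pair;   // metadata pair containing the stale source entry
    uint16_t move_id;     // which entry within that pair
};

// Stand-in for "delete entry id from metadata pair, clearing the move state
// in the same commit" -- the same single atomic commit the text relies on.
static void pair_delete_entry(uint32_t pair, uint16_t id) {
    printf("removing entry %u from pair %u (and clearing gstate)\n",
           (unsigned)id, (unsigned)pair);
}

static void fixup_move(struct gstate *g) {
    if (g->has_move) {
        // Power was lost mid-move: the destination already has the file,
        // so drop the duplicate at the source.
        pair_delete_entry(g->move_pair, g->move_id);
        g->has_move = false;
    }
}

int main(void) {
    struct gstate g = {true, 4, 2};   // pretend mount found "moving entry 2 of pair 4"
    fixup_move(&g);
    return 0;
}
```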

We can also move directories the same way we move files. There is the threaded
linked-list to consider, but leaving the threaded linked-list unchanged works
fine as the order doesn't really matter.

```
.--------. gstate = no move (m1^~m1)
.| root |-.
|| | |
.-------------|| |-'
| |'--------'
| '--|-|-|-'
| .------' | '-------.
| v v v
| .--------. .--------. .--------.
'->| dir A |->| dir B |->| dir C |
|| gdelta | || | || gdelta |
|| =~m1 | || | || =m1 |
|'--------' |'--------' |'--------'
'--------' '--------' '----|---'
v
.--------.
| file D |
| |
| |
'--------'
begin move, add reference in dir C, change gstate to have move
=>
.--------. gstate = moving dir B in root (m1^~m1^m2)
.| root |-.
|| | |
.--------------|| |-'
| |'--------'
| '--|-|-|-'
| .-------' | '----------.
| v | v
| .--------. | .--------.
'->| dir A |-. | .->| dir C |
|| gdelta | | | | || gdelta |
|| =~m1 | | | | || =m1^m2 |
|'--------' | | | |'--------'
'--------' | | | '---|--|-'
| | .-------' |
| v v | v
| .--------. | .--------.
'->| dir B |-' | file D |
|| | | |
|| | | |
|'--------' '--------'
'--------'
complete move, remove reference in root, change gstate to no move
=>
.--------. gstate = no move (m1^~m1^m2^~m2)
.| root |-.
|| gdelta | |
.-----------|| =~m2 |-'
| |'--------'
| '---|--|-'
| .-----' '-----.
| v v
| .--------. .--------.
'->| dir A |-. .->| dir C |
|| gdelta | | | || gdelta |
|| =~m1 | | '-|| =m1^m2 |-------.
|'--------' | |'--------' |
'--------' | '---|--|-' |
| .-' '-. |
| v v |
| .--------. .--------. |
'->| dir B |--| file D |-'
|| | | |
|| | | |
|'--------' '--------'
'--------'
```

Global state gives us a powerful tool we can use to solve the move problem.
And the result is surprisingly performant, only needing the minimum number
of states and using the same number of commits as a naive move. Additionally,
global state gives us a bit of persistent state we can use for some other
small improvements.

## Conclusion

And that's littlefs, thanks for reading!


[wikipedia-flash]: https://en.wikipedia.org/wiki/Flash_memory
[wikipedia-sna]: https://en.wikipedia.org/wiki/Serial_number_arithmetic
[wikipedia-crc]: https://en.wikipedia.org/wiki/Cyclic_redundancy_check
[wikipedia-cow]: https://en.wikipedia.org/wiki/Copy-on-write
[wikipedia-B-tree]: https://en.wikipedia.org/wiki/B-tree
[wikipedia-B+-tree]: https://en.wikipedia.org/wiki/B%2B_tree
[wikipedia-skip-list]: https://en.wikipedia.org/wiki/Skip_list
[wikipedia-ctz]: https://en.wikipedia.org/wiki/Count_trailing_zeros
[wikipedia-ecc]: https://en.wikipedia.org/wiki/Error_correction_code

Another large deleted chunk that doesn't make sense contextually …

```
[wikipedia-skip-list]: https://en.wikipedia.org/wiki/Skip_list
[wikipedia-ctz]: https://en.wikipedia.org/wiki/Count_trailing_zeros
[wikipedia-ecc]: https://en.wikipedia.org/wiki/Error_correction_code
@@ -2124,50 +2143,26 @@
```

Another lone diff marker …

geky (Member) commented Oct 14, 2025

Opened 65 other pull requests in 17 repositories

All on October 9th 🤔

This can't not be AI.

At least from looking at the rendered diff, we don't have to worry about losing our jobs to AI anytime soon.


```
![sum,i,0->n(ctz(i)+1) = 2n-popcount(n)][ctz-formula4]
$$
sum,i,0->n(ctz(i)+1) = 2n-popcount(n)
```

This isn't even LaTeX...
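For reference, the summation the original image presumably encoded, rendered as actual LaTeX:

$$
\sum_{i=0}^{n}\left(\operatorname{ctz}(i)+1\right) = 2n - \operatorname{popcount}(n)
$$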

geky added the "needs ai revolution" label ("the ai revolution has been postponed") Oct 14, 2025
geky closed this Oct 14, 2025