
Conversation

Contributor

@edg-l edg-l commented Oct 22, 2025

Currently the upstream branch has

trie_cache: Arc<Mutex<Arc<TrieLayerCache>>>,

where TrieLayerCache has the field layers: BTreeMap<H256, Arc<TrieLayer>>; judging from the changes, it looks like the author wanted Arc<BTreeMap<H256, Arc<TrieLayer>>> here.

The stores then hold an Arc<Mutex<Arc<TrieLayerCache>>>.

There are too many Arc layers here, which made some of the atomic Arc swaps complex. For example, to update the TrieLayerCache one would need to: lock trie_cache, clone the Arc, clone the TrieLayerCache, then (if the layers BTreeMap had a surrounding Arc) clone that BTreeMap too, get the TrieLayer, clone it, update it, put it back into the cloned BTreeMap, update the layers BTreeMap, and finally update the TrieLayerCache.

Instead, I made TrieLayerCache completely clone-safe, and its methods now handle all the complexity of managing the Arc swaps.

So users of the cache simply store a trie_cache: Arc<TrieLayerCache>, and call get, put_batch, get_commitable, etc.
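
To make that concrete, here is a minimal sketch of the idea (not the actual ethrex code: H256 and TrieLayer are stubbed out, the signatures and method bodies are my reading of the description above, and get_commitable is omitted):

```rust
use std::collections::BTreeMap;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::{Arc, Mutex};

// Placeholder types just for this sketch; the real H256 and TrieLayer
// live elsewhere in ethrex and look different.
type H256 = [u8; 32];

struct TrieLayer {
    nodes: BTreeMap<Vec<u8>, Vec<u8>>,
    parent: Option<H256>,
    #[allow(dead_code)]
    id: usize,
}

/// Cloning this cache only clones the two Arcs, so every clone shares
/// the same state ("clone safe").
#[derive(Clone, Default)]
struct TrieLayerCache {
    last_id: Arc<AtomicUsize>,
    layers: Arc<Mutex<BTreeMap<H256, Arc<TrieLayer>>>>,
}

impl TrieLayerCache {
    /// Take the lock only long enough to clone the Arc of the requested
    /// layer, then walk the parent chain without holding the mutex.
    fn get(&self, mut state_root: H256, key: &[u8]) -> Option<Vec<u8>> {
        loop {
            let layer = {
                let layers = self.layers.lock().ok()?;
                layers.get(&state_root)?.clone()
            };
            if let Some(value) = layer.nodes.get(key) {
                return Some(value.clone());
            }
            state_root = layer.parent?;
        }
    }

    /// Build the new layer outside the lock, then insert it in one short
    /// critical section.
    fn put_batch(
        &self,
        parent: Option<H256>,
        state_root: H256,
        nodes: BTreeMap<Vec<u8>, Vec<u8>>,
    ) {
        let id = self.last_id.fetch_add(1, Ordering::Relaxed) + 1;
        let layer = Arc::new(TrieLayer { nodes, parent, id });
        if let Ok(mut layers) = self.layers.lock() {
            layers.insert(state_root, layer);
        }
    }
}
```

The point is that a store can then hold a plain Arc<TrieLayerCache> and clone it freely, since all the synchronization lives inside the cache.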

If this design seems good, there are still things to do:

  • Some tests don't pass; however, I'm not entirely sure it's due to the RocksDB implementation, since the tests use the in-memory DB, which doesn't use the new background mechanism.

github-actions bot commented Oct 22, 2025

Lines of code report

Total lines added: 20
Total lines removed: 11
Total lines changed: 31

Detailed view
+---------------------------------------------+-------+------+
| File                                        | Lines | Diff |
+---------------------------------------------+-------+------+
| ethrex/crates/storage/store_db/in_memory.rs | 630   | +1   |
+---------------------------------------------+-------+------+
| ethrex/crates/storage/store_db/rocksdb.rs   | 1476  | -11  |
+---------------------------------------------+-------+------+
| ethrex/crates/storage/trie_db/layering.rs   | 156   | +19  |
+---------------------------------------------+-------+------+

-    last_id: usize,
-    layers: BTreeMap<H256, Arc<TrieLayer>>,
+    last_id: Arc<AtomicUsize>,
+    layers: Arc<Mutex<BTreeMap<H256, Arc<TrieLayer>>>>,
Contributor Author

@edg-l edg-l Oct 22, 2025

I think the mutex surrounding the BTreeMap is worth it (as done now), since it's only used to do the get and clone ASAP, then unlock again. Using an Arc<Mutex<Arc<...>>> would be too complex IMHO.

Contributor

This one forces the tree traversal to be done with the mutex taken. We want to avoid locking in the read path; that's why we would rather have a single outer lock and an Arc clone there.
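
For comparison, this is how I read that suggestion (again only a sketch, not code from this PR; types are stubbed):

```rust
use std::collections::BTreeMap;
use std::sync::{Arc, Mutex};

type H256 = [u8; 32];
struct TrieLayer;

struct TrieLayerCache {
    // A single outer lock around an Arc of the whole map.
    layers: Mutex<Arc<BTreeMap<H256, Arc<TrieLayer>>>>,
}

impl TrieLayerCache {
    /// Readers take the lock only to clone the outer Arc; the map lookup and
    /// any layer traversal then happen entirely outside the lock.
    fn snapshot(&self) -> Arc<BTreeMap<H256, Arc<TrieLayer>>> {
        Arc::clone(&*self.layers.lock().unwrap())
    }
}
```

A writer would build the updated map outside the lock and swap the Arc back in under the same lock, copy-on-write style.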

     parent,
-    id: self.last_id,
+    id: last_id + 1,
 }
Contributor Author

This +1 is to maintain the same ids used before.

Contributor

These kinds of comments belong in the code.
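
For instance (hypothetical helper, just to show where such a comment could live):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

fn next_layer_id(last_id: &AtomicUsize) -> usize {
    // +1 so the generated layer ids stay identical to the sequence the
    // previous, non-atomic implementation produced.
    last_id.fetch_add(1, Ordering::Relaxed) + 1
}
```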

}
db.write(batch)?;
// Phase 3: update diff layers with the removal of bottom layer.
*trie_cache.lock().map_err(|_| StoreError::LockError)? = Arc::new(trie_mut);
Contributor

This mutation needs to happen at the end. Otherwise you lose the bottom layer BEFORE there is a backing disk layer with the data visible to users.
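
In other words, something like this ordering (purely illustrative; Store, StoreError and the method names are placeholders, not the real ethrex API):

```rust
struct Store;

#[derive(Debug)]
struct StoreError;

impl Store {
    fn write_bottom_layer_to_disk(&self) -> Result<(), StoreError> {
        Ok(()) // stand-in for building the batch and calling db.write(batch)
    }

    fn drop_bottom_layer_from_cache(&self) -> Result<(), StoreError> {
        Ok(()) // stand-in for swapping the updated TrieLayerCache in
    }

    fn commit_bottom_layer(&self) -> Result<(), StoreError> {
        // Persist the bottom layer first, so readers always have a backing
        // disk layer for that data...
        self.write_bottom_layer_to_disk()?;
        // ...and only then remove it from the in-memory diff layers.
        self.drop_bottom_layer_from_cache()
    }
}
```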

@@ -1,3 +1,5 @@
#![allow(clippy::type_complexity)]
Contributor

Don't use global allows.
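
e.g. scope the allow to the item that actually trips the lint instead of the whole crate (types stubbed here):

```rust
use std::collections::BTreeMap;
use std::sync::{Arc, Mutex};

type H256 = [u8; 32];
struct TrieLayer;

struct TrieLayerCache {
    // Allow scoped to the one field with the complex type, not the crate root.
    #[allow(clippy::type_complexity)]
    layers: Arc<Mutex<BTreeMap<H256, Arc<TrieLayer>>>>,
}
```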
