feat(validator_node): new state store implementation using RocksDB #1192
base: development
Conversation
Test Results (CI): 633 tests (+40), 633 ✅ (+40), 3h 40m 48s ⏱️ (+2h 6m 34s). Results for commit f62383c; ± comparison against base commit 619c78e. This pull request removes 1 test and adds 41. Note that renamed tests count towards both.
Walkthrough

This PR integrates a new RocksDB state store package into the workspace. It adds a full suite of models and utilities for managing blockchain state (blocks, transactions, votes, substates, etc.) using RocksDB. The SQLite state store package has its description updated. Several consensus model types have been updated to derive serialization/deserialization traits and now include new setter methods. In addition, the PR introduces a comprehensive set of tests and helper functions covering operations such as block diffs, missing transactions, quorum certificates, and more, ensuring both the RocksDB and SQLite implementations are well validated.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant C as Client
    participant S as RocksDbStateStore
    participant W as WriteTransaction
    C->>S: connect(path)
    S-->>C: State store instance
    C->>S: create_write_tx()
    S-->>W: Write transaction created
    W->>W: Perform operations (insert/update/delete)
    W->>S: commit() / rollback()
    S-->>C: Operation result
```
Actionable comments posted: 12
🧹 Nitpick comments (91)
dan_layer/storage/src/consensus_models/quorum_certificate.rs (1)
196-198: Minor inconsistency in parameter naming

The parameter `is_shared_processed` in the setter method doesn't match the field name `is_shares_processed`. This is a small typo that could lead to confusion.

```diff
-    pub fn set_is_shares_processed(&mut self, is_shared_processed: bool) {
-        self.is_shares_processed = is_shared_processed;
+    pub fn set_is_shares_processed(&mut self, is_shares_processed: bool) {
+        self.is_shares_processed = is_shares_processed;
```

dan_layer/storage/src/consensus_models/state_tree_diff.rs (1)
16-16: Consider adding serialization support to PendingShardStateTreeDiff for consistency

While VersionedStateHashTreeDiff has been updated with Serialize/Deserialize traits, PendingShardStateTreeDiff hasn't received the same update. Consider adding these traits for consistency if this struct needs to be serialized for the RocksDB implementation.

```diff
-#[derive(Debug, Clone, Default)]
+#[derive(Debug, Clone, Default, Serialize, Deserialize)]
 pub struct PendingShardStateTreeDiff {
     pub version: Version,
     pub diff: StateHashTreeDiff<Version>,
 }
```

dan_layer/state_store_rocksdb/src/model/traits.rs (1)
69-349: Ensure safe key handling and consider edge cases

- Using `String::from_utf8().unwrap()` to rebuild keys can cause panics if stored keys are not valid UTF-8. If binary keys are a possibility, consider hex or base64 encoding, and handle decoding errors gracefully (a minimal sketch follows below).
- Repeated patterns for iterating and deleting could be simplified, or the iteration could handle large data sets more cautiously to avoid potential performance bottlenecks.
- Since these implementations handle low-level data operations, ensure that concurrency with RocksDB transactions is managed properly, especially for large-scale usage scenarios.
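As a minimal sketch of the first point, assuming a hypothetical `KeyDecodeError` type (the real code would map the failure into `RocksDbStorageError` rather than unwrapping):

```rust
// Rebuild a stored key without panicking on non-UTF-8 bytes.
#[derive(Debug)]
struct KeyDecodeError(Vec<u8>);

fn decode_key(bytes: &[u8]) -> Result<String, KeyDecodeError> {
    // String::from_utf8 hands back the offending bytes on failure, so the
    // caller can log them (e.g. hex-encoded) instead of crashing the node.
    String::from_utf8(bytes.to_vec()).map_err(|e| KeyDecodeError(e.into_bytes()))
}

fn main() {
    assert_eq!(decode_key(b"blocks_abc").unwrap(), "blocks_abc");
    assert!(decode_key(&[0xff, 0xfe]).is_err());
}
```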
dan_layer/state_store_rocksdb/src/error.rs (2)
58-68: Consider adding more context to encoding/decoding error conversions

The conversion from `RocksDbStorageError` to `StorageError` for encoding/decoding errors currently uses empty strings for the `operation` and `item` fields. Adding more context would improve debugging capabilities.

```diff
 RocksDbStorageError::EncodeError { source } => StorageError::EncodingError {
-    operation: "",
-    item: "",
+    operation: "rocksdb_encode",
+    item: "model_data",
     details: source.to_string(),
 },
 RocksDbStorageError::DecodeError { source } => StorageError::DecodingError {
-    operation: "",
-    item: "",
+    operation: "rocksdb_decode",
+    item: "model_data",
     details: source.to_string(),
 },
```
72-74: Explicit pattern matching for all variants would be more robust

The catch-all pattern (`other => ...`) seems unnecessary, as all variants should be explicitly matched above. It could hide errors if new variants are added but not properly handled.

```diff
-        other => StorageError::General {
-            details: other.to_string(),
-        },
+        RocksDbStorageError::NotAllTransactionsFound { operation, details } => StorageError::General {
+            details: format!("[{}] Not all queried transactions were found: {}", operation, details),
+        },
```

dan_layer/state_store_rocksdb/src/model/last_sent_vote.rs (1)
1-1: Incorrect copyright year

The copyright year is set to 2025, but it should be 2024.

```diff
-// Copyright 2025. The Tari Project
+// Copyright 2024. The Tari Project
```

dan_layer/state_store_rocksdb/Cargo.toml (2)
11-12: Address the TODO comment about FixedHash dependency

The TODO comment indicates that `tari_common_types` is required specifically for `FixedHash`. Consider either:

- Removing this comment if the dependency is fully justified
- Creating a specific issue to track dependency refactoring
- Investigating if a more targeted dependency could be used

Dependencies should ideally be deliberately chosen rather than included with TODOs.
21-30: Consider pinning specific dependency versions for better reproducibility

While using workspace-level versions is appropriate for internal dependencies, consider explicitly pinning versions for critical external dependencies like `rocksdb` to ensure consistent build behavior, especially since RocksDB is the core of this new implementation.

dan_layer/storage_tests/src/foreign_proposals.rs (2)
67-70: Create tracking issues for unimplemented test functionality

The commented code indicates that the "locked_block" functionality is not yet implemented. To ensure this doesn't get overlooked, consider creating a tracking issue and referencing it in the comment.

73-75: Create tracking issues for unimplemented test functionality

Similar to the previous comment, this TODO waits for the "get_block_ids_with_commands_between" implementation. Consider creating a tracking issue to ensure this isn't forgotten.
dan_layer/state_store_rocksdb/src/model/vote.rs (2)
1-22: Update copyright year to current year

The copyright notice indicates 2025, which is in the future. Consider changing to 2024 or the actual year of creation.

```diff
-// Copyright 2025. The Tari Project
+// Copyright 2024. The Tari Project
```
28-35: Add documentation for VoteModel

Consider adding documentation comments to explain the purpose of the struct and its methods. This would enhance code maintainability and make it easier for other developers to understand the purpose of this model.

```diff
+/// A RocksDB model for storing Vote objects.
 pub struct VoteModel {}

 impl VoteModel {
+    /// Generates a unique key for a vote based on block ID and sender.
+    ///
+    /// # Arguments
+    /// * `block_id` - The ID of the block this vote is associated with
+    /// * `sender_leaf_hash_opt` - Optional hash of the sender
     pub fn key_from_block_and_sender(block_id: &BlockId, sender_leaf_hash_opt: Option<&FixedHash>) -> String {
```

dan_layer/state_store_rocksdb/src/model/epoch_checkpoint.rs (3)
1-22: Update copyright year to current year

The copyright notice indicates 2025, which is in the future. Consider changing to 2024 or the actual year of creation.

```diff
-// Copyright 2025. The Tari Project
+// Copyright 2024. The Tari Project
```
28-34: Add documentation for EpochCheckpointModel

Consider adding documentation comments to explain the purpose of this struct and its methods for better code maintainability and developer understanding.

```diff
+/// A RocksDB model for storing EpochCheckpoint objects.
 pub struct EpochCheckpointModel {}

 impl EpochCheckpointModel {
+    /// Generates a unique key for an epoch checkpoint based on its epoch number.
+    ///
+    /// # Arguments
+    /// * `epoch` - The epoch number to generate a key for
     pub fn key_from_epoch(epoch: Epoch) -> String {
```
40-41: Consider consistent key prefix formatting

For maintainability and readability, consider using a consistent format for key prefixes across all models, with underscores between words. This prefix ("epochcheckpoints") uses no separators, as do several others flagged below (e.g. "foreignparkedblocks").

```diff
 fn key_prefix() -> &'static str {
-    "epochcheckpoints"
+    "epoch_checkpoints"
 }
```

dan_layer/state_store_rocksdb/src/model/state_tree.rs (4)
1-22: Update copyright year to current year

The copyright notice indicates 2025, which is in the future. Consider changing to 2024 or the actual year of creation.

```diff
-// Copyright 2025. The Tari Project
+// Copyright 2024. The Tari Project
```
29-34: Add documentation for StateTreeModelData

The struct lacks documentation explaining its purpose and the meaning of its fields. Adding documentation would enhance code maintainability.

```diff
+/// Represents a node in the state tree stored in RocksDB.
+///
+/// This structure contains the shard identifier, node key, and the actual node data.
 #[derive(Debug, Clone, Serialize, Deserialize)]
 pub struct StateTreeModelData {
+    /// The shard this state tree node belongs to
     pub shard: Shard,
+    /// The key identifying this node in the tree
     pub key: NodeKey,
+    /// The actual node data with version information
     pub node: Node<Version>,
 }
```
36-42: Add documentation for StateTreeModel

Consider adding documentation comments to explain the purpose of this struct and its methods for better code maintainability.

```diff
+/// A RocksDB model for managing state tree nodes.
 pub struct StateTreeModel {}

 impl StateTreeModel {
+    /// Generates a unique key for a state tree node based on its shard and node key.
+    ///
+    /// # Arguments
+    /// * `shard` - The shard identifier
+    /// * `node` - The node key in the state tree
     pub fn key_from_shard_and_node(shard: Shard, node: &NodeKey) -> String {
```
48-49: Consider consistent key prefix formatting

For maintainability and readability, consider using "state_tree" instead of "statetree" to maintain consistency with other models that use underscores between words.

```diff
 fn key_prefix() -> &'static str {
-    "statetree"
+    "state_tree"
 }
```

dan_layer/state_store_rocksdb/src/model/foreign_parked_blocks.rs (3)
1-22: Update copyright year to current year

The copyright notice indicates 2025, which is in the future. Consider changing to 2024 or the actual year of creation.

```diff
-// Copyright 2025. The Tari Project
+// Copyright 2024. The Tari Project
```
27-33: Add documentation for ForeignParkedBlockModel

Consider adding documentation comments to explain the purpose of this struct and its methods for better code maintainability and developer understanding.

```diff
+/// A RocksDB model for storing foreign parked block proposals.
 pub struct ForeignParkedBlockModel {}

 impl ForeignParkedBlockModel {
+    /// Generates a unique key for a foreign parked block based on its block ID.
+    ///
+    /// # Arguments
+    /// * `block_id` - The ID of the block
     pub fn key_from_block_id(block_id: &BlockId) -> String {
```
38-40: Consider consistent key prefix formatting

For maintainability and readability, consider using a consistent format for key prefixes across all models, with underscores between words.

```diff
 fn key_prefix() -> &'static str {
-    "foreignparkedblocks"
+    "foreign_parked_blocks"
 }
```

dan_layer/storage_tests/src/substates.rs (1)
49-164: Comprehensive substate operation testing

This function provides thorough testing of substate operations, including creation, retrieval, version management, and substate lifecycle. The test cases effectively verify both basic operations and more complex scenarios like version handling.
Consider adding tests for edge cases such as:
- Retrieving non-existent substates
- Error handling for invalid inputs
- Concurrent modification scenarios
dan_layer/storage_tests/src/misc.rs (2)
1-1: Incorrect copyright year

The copyright year is set to 2025, which is ahead of the current year (2024).

```diff
-// Copyright 2025 The Tari Project
+// Copyright 2024 The Tari Project
```
51-52: Function is overly complex

The `miscellaneous_operations` function is quite large (marked with `#[allow(clippy::too_many_lines)]`) and tests multiple unrelated operations. For better maintainability and clarity, consider breaking it down into smaller, more focused test functions. You could organize tests by functionality categories like:
- State tracking tests (LastVoted, LastExecuted, etc.)
- Block-related tests (LockedBlock, LeafBlock, etc.)
- Counter tests
- Tree node tests
dan_layer/storage_tests/src/missing_transactions.rs (2)
1-1: Incorrect copyright year

The copyright year is set to 2025, which is ahead of the current year (2024).

```diff
-// Copyright 2025 The Tari Project
+// Copyright 2024 The Tari Project
```
60-73: Robust missing transaction operation testing

The implementation provides good coverage of the basic flow for missing transaction tracking: insertion, querying, and removal. The assertions correctly verify the expected state after each operation.
Consider enhancing test coverage by adding:
- Tests for inserting and removing multiple transactions
- Error handling tests (e.g., removing a non-existent transaction)
- Edge case testing (e.g., behavior with empty transaction lists)
dan_layer/storage_tests/src/quorum_certificates.rs (2)
1-1: Incorrect copyright year

The copyright year is set to 2025, which is ahead of the current year (2024).

```diff
-// Copyright 2025 The Tari Project
+// Copyright 2024 The Tari Project
```
27-68: Well-structured quorum certificate testing

The test function thoroughly covers certificate insertion and retrieval operations, with good use of assertions to verify the expected behavior.
Consider adding tests for:
- Error handling (e.g., retrieving non-existent certificates)
- Certificate updates
- Certificate deletion (if supported)
- Querying by additional criteria like epoch or height
dan_layer/storage_tests/src/blocks.rs (5)
202-203: Add a comment explaining why a block isn't considered its own ancestor

The test verifies that `blocks_is_ancestor(block2.id(), block2.id())` returns `false`, indicating a block is not considered an ancestor of itself. This is a non-obvious design decision that would benefit from a brief explanatory comment.
231-235: Address TODOs for comprehensive test coverage

There are TODO comments indicating missing test cases for longer parent chains and blocks with multiple children. Implementing these tests would provide more comprehensive coverage of the parent-child relationship logic.
Do you want me to generate implementations for these test cases?
264-265: The use of `#[allow(clippy::too_many_lines)]` should be addressed

Instead of suppressing the clippy warning, consider refactoring the test function into smaller, more focused test cases. This would improve maintainability and make it easier to understand each test's purpose.
370-371: Implement the TODO for the `blocks_get_all_between` test

The test for `blocks_get_all_between` works, but the TODO suggests that more comprehensive testing is needed. Consider adding test cases with different range parameters to fully validate this functionality.

Do you want me to suggest additional test cases for the `blocks_get_all_between` method?
385-387: Address TODOs for epoch range and VN key tests

There are TODO comments for testing with a greater epoch range and with a specific validator node key. Implementing these tests would improve coverage of the filtering functionality.
Do you want me to generate implementations for these test cases?
dan_layer/storage_tests/src/pending_state_tree_diff.rs (2)
45-47: Add assertions to verify successful insertion of the pending state tree diff

The test inserts a state tree diff but doesn't explicitly verify the insertion succeeded before testing the retrieval. Consider adding an assertion immediately after insertion to confirm success.

```diff
 tx.pending_state_tree_diffs_insert(*block_2.id(), shard, &diff).unwrap();
+// Verify diff was inserted successfully
+let inserted_diffs = tx.pending_state_tree_diffs_get_all_up_to_commit_block(block_2.id()).unwrap();
+assert_eq!(inserted_diffs.len(), 1);
```
26-66: Enhance test coverage for pending state tree diff operations

The test only covers the basic happy path for pending state tree diffs. Consider adding tests for:
- Error handling when inserting invalid diffs
- Multiple diffs for the same block
- Boundary conditions (e.g., empty diffs)
- Concurrent modifications
These additional test cases would provide more comprehensive validation of the pending state tree diff functionality.
dan_layer/storage_tests/src/votes.rs (2)
75-83: Enhance randomness in the `create_random_vote` function

The `create_random_vote` function uses fixed values for most fields except the block ID, sender leaf hash, and signature. Enhancing the randomness by varying other fields like `decision` would make the tests more robust against potential implementation biases.

```diff
 fn create_random_vote() -> Vote {
+    use rand::{thread_rng, Rng};
+    let mut rng = thread_rng();
+    let decision = if rng.gen_bool(0.5) {
+        QuorumDecision::Accept
+    } else {
+        QuorumDecision::Reject
+    };
     Vote {
-        epoch: Epoch::zero(),
+        epoch: Epoch(rng.gen_range(0..10)),
         block_id: create_random_block_id(),
-        decision: QuorumDecision::Accept,
+        decision,
         sender_leaf_hash: create_random_hash(),
         signature: create_random_vn_signature(),
     }
 }
```
33-73: Add tests for additional vote scenarios

The current tests only verify basic CRUD operations with one vote per block. Consider adding tests for:
- Multiple votes for the same block
- Votes with different decisions (Accept/Reject)
- Edge cases like retrieving votes for non-existent blocks
These additional test cases would provide more comprehensive validation of the vote functionality.
dan_layer/storage_tests/src/transactions.rs (3)
156-161: Add a comment explaining why the SQLite test is ignored

The SQLite test for transaction operations is marked with `#[ignore]`, but there's no comment explaining why, unlike in blocks.rs. Add a comment to clarify the reason for ignoring this test.

```diff
 #[ignore]
+// TODO: sqlite fails due to missing foreign key values or other constraints
 #[test]
 fn transaction_operations_sqlite() {
     let db = create_sqlite();
     transaction_operations(db);
 }
```
277-282: Add a comment explaining why the SQLite test is ignored

Similar to the previous comment, the SQLite test for transaction execution operations is marked with `#[ignore]` without an explanatory comment.

```diff
 #[ignore]
+// TODO: sqlite fails due to missing foreign key values or other constraints
 #[test]
 fn transaction_execution_operations_sqlite() {
     let db = create_sqlite();
     transaction_execution_operations(db);
 }
```
290-359: Enhance test coverage for transaction execution operations

The transaction execution test is focused on happy-path scenarios. Consider adding tests for:
- Error handling (e.g., attempting to insert duplicate executions)
- Edge cases (e.g., transactions with failed execution)
- Transaction finalization with varying conditions
These additional test cases would provide more comprehensive validation of the transaction execution functionality.
dan_layer/state_store_rocksdb/src/model/transaction.rs (1)
50-56: Consider using a more functional approach for result collection

The result collection could be made more concise using a functional approach.

```diff
-    let mut res = vec![];
-    for value in values.into_iter().flatten() {
-        let rec = Self::decode(value)?;
-        res.push(rec);
-    }
-
-    Ok(res)
+    values
+        .into_iter()
+        .flatten()
+        .map(|value| Self::decode(value))
+        .collect()
```

dan_layer/state_store_rocksdb/src/model/parked_block.rs (1)
36-44: Consider optimizing the key generation

The `key_from_block_id` method converts the block ID to a string and then calls `key_from_block_id_str`. Depending on how expensive the `to_string()` operation is, this could potentially be optimized. If the BlockId's ToString implementation is costly, consider a more direct approach:

```diff
-    pub fn key_from_block_id(block_id: &BlockId) -> String {
-        Self::key_from_block_id_str(&block_id.to_string())
-    }
+    pub fn key_from_block_id(block_id: &BlockId) -> String {
+        format!("{}_{}", Self::key_prefix(), block_id)
+    }
```

This would require that BlockId implements Display, which is likely the case if `to_string()` works.
dan_layer/state_store_rocksdb/src/model/leaf_block.rs (2)
29-34: Consider adding documentation for the LeafBlockData struct

The `LeafBlockData` struct has a comment explaining the purpose of the `created_at` field, but this would be better as proper documentation using `///` doc comments. Additionally, documenting the entire struct would improve maintainability and clarity.

```diff
+/// Represents leaf block data stored in RocksDB
 #[derive(Debug, Clone, Serialize, Deserialize)]
 pub struct LeafBlockData {
     pub leaf_block: LeafBlock,
-    // we need this field to keep track of insertion order
+    /// We need this field to keep track of insertion order
     pub created_at: RocksdbTimestamp,
 }
```
45-51: Add documentation for LeafBlockModel and its methods

The purpose and usage of the `LeafBlockModel` struct and its `key_prefix_by_epoch` method aren't immediately clear without documentation. Consider adding doc comments to explain their roles.

```diff
+/// Model for storing and retrieving leaf blocks in RocksDB
 pub struct LeafBlockModel {}

 impl LeafBlockModel {
+    /// Generates a key prefix for querying leaf blocks by epoch
     pub fn key_prefix_by_epoch(epoch: Epoch) -> String {
         format!("{}_{}", Self::key_prefix(), epoch)
     }
 }
```

dan_layer/storage_tests/src/foreign_substate_pledges.rs (4)
14-19: Add documentation for test functions

The test function lacks documentation explaining its purpose, test scenario, and expected outcomes. This documentation would improve maintainability and help other developers understand the test coverage.

```diff
+/// Tests foreign substate pledge operations using SQLite as the backing database.
+/// Verifies that pledges can be saved, retrieved, and removed correctly.
 #[test]
 fn foreign_substate_pledges_sqlite() {
     let db = create_sqlite();
     db.foreign_keys_off().unwrap();
     foreign_substate_pledges_operations(db);
 }
```
21-25: Add documentation for test functions

Similar to the SQLite test, this RocksDB test function would benefit from proper documentation.

```diff
+/// Tests foreign substate pledge operations using RocksDB as the backing database.
+/// Verifies that pledges can be saved, retrieved, and removed correctly.
 #[test]
 fn foreign_substate_pledges_rocksdb() {
     let db = create_rocksdb();
     foreign_substate_pledges_operations(db);
 }
```
27-49: Enhance test coverage and improve error handling

The test function has a few areas for improvement:

- It only tests the count of results, not their content
- It rolls back the transaction at the end, which means persistence isn't tested
- No error cases are tested (e.g., saving duplicates, retrieving non-existent pledges)

Consider adding additional tests or enhancing this one to cover these scenarios.

```diff
 fn foreign_substate_pledges_operations(db: impl StateStore) {
     let mut tx = db.create_write_tx().unwrap();
     let transaction_id = TransactionId::default();
     let shard_group = ShardGroup::all_shards(NumPreshards::P1);
     let pledge_1 = build_substate_pledge();
     let pledge_2 = build_substate_pledge();
     tx.foreign_substate_pledges_save(&transaction_id, shard_group, &vec![pledge_1.clone(), pledge_2])
         .unwrap();
     let res = tx
         .foreign_substate_pledges_get_all_by_transaction_id(&transaction_id)
         .unwrap();
     assert_eq!(res.len(), 2);
+    // Verify the actual content matches what was saved
+    assert!(res.iter().any(|p| matches!(p, SubstatePledge::Input { substate_id, .. } if substate_id == &pledge_1.substate_id())));
     tx.foreign_substate_pledges_remove_many(vec![&transaction_id]).unwrap();
     let res = tx
         .foreign_substate_pledges_get_all_by_transaction_id(&transaction_id)
         .unwrap();
     assert_eq!(res.len(), 0);
     tx.rollback().unwrap();
+
+    // Additional test for persistence
+    let mut tx2 = db.create_write_tx().unwrap();
+    tx2.foreign_substate_pledges_save(&transaction_id, shard_group, &vec![pledge_1.clone()])
+        .unwrap();
+    tx2.commit().unwrap();
+
+    // Verify data persists after commit
+    let tx3 = db.create_read_tx().unwrap();
+    let res = tx3.foreign_substate_pledges_get_all_by_transaction_id(&transaction_id).unwrap();
+    assert_eq!(res.len(), 1);
 }
```
51-57: Improve test data generation

The `build_substate_pledge` function creates fairly simplistic test data. For more comprehensive testing, consider:

- Adding parameters to create more varied test data
- Creating a builder pattern for more flexible test data construction
- Including test cases for different types of pledges

```diff
-fn build_substate_pledge() -> SubstatePledge {
+/// Builds a test substate pledge with customizable parameters
+fn build_substate_pledge(is_write: bool) -> SubstatePledge {
     SubstatePledge::Input {
         substate_id: VersionedSubstateId::new(create_random_substate_id(), 0),
-        is_write: false,
+        is_write,
         substate: build_substate_value(None),
     }
 }
```

dan_layer/state_store_rocksdb/src/model/high_qc.rs (2)
36-43: Avoid cloning unnecessarily in the From implementation

The `From` implementation for `HighQcModelData` unnecessarily clones the `HighQc`. Since you're already working with a reference, consider dereferencing the value instead of cloning it, especially if `HighQc` is large.

```diff
 impl From<&HighQc> for HighQcModelData {
     fn from(value: &HighQc) -> Self {
         Self {
-            high_qc: value.clone(),
+            high_qc: *value, // Dereference instead of clone if HighQc implements Copy
             created_at: RocksdbTimestamp::now(),
         }
     }
 }
```

If `HighQc` does not implement the `Copy` trait, you'll need to keep the clone, but consider implementing `Copy` for `HighQc` if it makes sense for your use case.
29-34: Add proper documentation for public structs and fields

Similar to leaf_block.rs, the `HighQcModelData` struct would benefit from proper documentation using doc comments. This would improve maintainability and help other developers understand its purpose.

```diff
 #[derive(Debug, Clone, Serialize, Deserialize)]
+/// Represents high quorum certificate data stored in RocksDB
 pub struct HighQcModelData {
     pub high_qc: HighQc,
-    // we need this field to keep track of insertion order
+    /// Used to track insertion order for deterministic retrieval
     pub created_at: RocksdbTimestamp,
 }
```

dan_layer/state_store_rocksdb/src/model/last_voted.rs (3)
80-80: Consider enhancing the error message for the put operation

The current implementation passes the operation to the column family's put method but doesn't provide a specific error message, which makes debugging harder if an error occurs.

```diff
-    TimestampColumnFamily::put(db.clone(), tx, operation, value, main_key_bytes)?;
+    TimestampColumnFamily::put(db.clone(), tx, operation, value, main_key_bytes)
+        .map_err(|e| RocksDbStorageError::ColumnFamilyError {
+            operation: operation.to_string(),
+            cf_name: TimestampColumnFamily::name().to_string(),
+            source: Box::new(e),
+        })?;
```
39-46: Avoid unnecessary cloning in the From implementation

Similar to other model files, the `From` implementation for `LastVotedModelData` unnecessarily clones the `LastVoted` value. If possible, consider either dereferencing (if `LastVoted` implements `Copy`) or implementing a more efficient conversion method.

```diff
 impl From<&LastVoted> for LastVotedModelData {
     fn from(value: &LastVoted) -> Self {
         Self {
-            last_voted: value.clone(),
+            last_voted: *value, // If LastVoted implements Copy
             created_at: RocksdbTimestamp::now(),
         }
     }
 }
```
106-109: Consider adding a prefix to timestamp keys

The `build_key` method in `TimestampColumnFamily` uses just the key prefix and timestamp, which could potentially collide with other keys that happen to have the same format. Consider adding a specific prefix or delimiter for timestamp column family keys.

```diff
 fn build_key(value: &Self::Item) -> String {
     // the key segment for "created_at" allows us to order by creation time
-    format!("{}_{}", LastVotedModel::key_prefix(), value.created_at)
+    format!("ts_{}_{}", LastVotedModel::key_prefix(), value.created_at)
 }
```

dan_layer/state_store_rocksdb/src/model/substate.rs (1)
76-78: Use a typed error instead of a string

Returning a plain string in the `TryFrom<SubstateModelData> for SubstateRecord` implementation can obscure error-handling paths. Consider introducing a more structured error type (e.g. `SubstateError`) for clearer handling.

```diff
-    type Error = String;
+    #[derive(Debug, thiserror::Error)]
+    enum SubstateConversionError {
+        #[error("Deserialization error: {0}")]
+        Deserialization(String),
+    }
```

dan_layer/state_store_rocksdb/src/model/foreign_substate_pledge.rs (1)
48-50: Guard against potential key collisions with an empty substate address

When `substate_address_opt` is `None`, the resulting key includes an extra underscore. Although this might be intentional, consider using a more robust delimiter or clarifying how empty addresses are handled to minimize potential key collisions or confusion.

dan_layer/state_store_rocksdb/src/model/substate_locks.rs (1)
59-60: Validate underscore usage in key formatting

Keys rely on underscore separators for fields like `substate_id`, `block_id`, and `transaction_id`. If these fields contain underscores or unusual characters, the current format could complicate parsing. Consider stricter delimiters or escaping; a sketch of one option follows.
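A minimal sketch of an escaping option, hex-encoding each segment so payload bytes can never be mistaken for the `_` delimiter (the prefix and segment contents are illustrative, not the crate's actual key layout):

```rust
// Hex-encode each key segment before joining, making the delimiter unambiguous.
fn encode_segment(bytes: &[u8]) -> String {
    bytes.iter().map(|b| format!("{b:02x}")).collect()
}

fn build_key(prefix: &str, segments: &[&[u8]]) -> String {
    let mut key = prefix.to_string();
    for seg in segments {
        key.push('_');
        key.push_str(&encode_segment(seg));
    }
    key
}

fn main() {
    // An underscore inside a segment no longer splits the key ambiguously.
    let key = build_key("substatelocks", &[b"id_with_underscore", b"block1"]);
    println!("{key}");
    assert!(!encode_segment(b"id_with_underscore").contains('_'));
}
```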
dan_layer/state_store_rocksdb/src/model/transaction_pool.rs (1)

36-71: Review the necessity of a `Result` return type

The code in `try_convert` never actually returns an error, making the `Result` type optional. Unless you plan to handle errors here in future expansions, consider returning the `TransactionPoolRecord` directly to simplify the API.

dan_layer/state_store_rocksdb/src/model/locked_block.rs (1)
53-68: Potential key collisions due to timestamp resolution

Since `key` relies on `created_at` for uniqueness, collisions can occur if multiple locked blocks happen to share the same timestamp resolution. If guaranteeing unique keys is critical, consider using an additional sequence or a higher-resolution timestamp; a minimal sketch follows.
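One way to realize the sequence idea is to append a process-wide monotonic counter after the timestamp; a minimal sketch (the key layout is illustrative):

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::time::{SystemTime, UNIX_EPOCH};

static SEQ: AtomicU64 = AtomicU64::new(0);

// Zero-padded fields keep lexicographic key order aligned with insertion order.
fn unique_key(prefix: &str) -> String {
    let nanos = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("clock before epoch")
        .as_nanos();
    let seq = SEQ.fetch_add(1, Ordering::Relaxed);
    format!("{prefix}_{nanos:032}_{seq:016}")
}

fn main() {
    let a = unique_key("lockedblock");
    let b = unique_key("lockedblock");
    // Distinct (and still ordered) even if both calls land on the same timestamp.
    assert_ne!(a, b);
    assert!(a < b);
}
```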
dan_layer/state_store_rocksdb/src/store.rs (3)
30-30: Use more direct logging macros

The usage of `log::log` is flexible, but you might consider using the provided macros (`log::trace!`, `log::warn!`, etc.) directly for clearer code readability and easier filtering by logging level.
86-95: Confirm column family creation strategy

Creating multiple column families here is correct for partitioning data, but spans a fairly large array. Ensure that each model references a unique CF name to avoid confusion, and verify that removing or deprecating a model is safe when the CF is no longer needed.
125-140: Evaluate the 1-second warning threshold

Logging a warning after 1 second could be too conservative or too lenient depending on your system's performance profile. Revisiting this threshold may help avoid spurious warnings in production or ignoring potential slowdowns; a configurable-threshold sketch follows.
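One way to make the threshold easy to revisit is to pull it into a named constant and measure with `Instant`; a minimal sketch (the constant name and the `eprintln!` stand-in for the store's `log` calls are illustrative):

```rust
use std::time::{Duration, Instant};

// Tune in one place instead of hard-coding 1 second at the call site.
const SLOW_COMMIT_WARN_THRESHOLD: Duration = Duration::from_secs(1);

fn timed_commit<F: FnOnce()>(commit: F) {
    let start = Instant::now();
    commit();
    let elapsed = start.elapsed();
    if elapsed > SLOW_COMMIT_WARN_THRESHOLD {
        // The real store would use log::warn! here.
        eprintln!("slow commit: {elapsed:?} (threshold {SLOW_COMMIT_WARN_THRESHOLD:?})");
    }
}

fn main() {
    timed_commit(|| std::thread::sleep(Duration::from_millis(5)));
}
```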
dan_layer/state_store_rocksdb/src/model/foreign_send_counter.rs (2)

49-50: Inconsistent trailing underscore in prefix vs. key usage

`key_prefix_by_block_id` includes a trailing underscore (`format!("{}_{}_", ...)`), whereas `key` omits it. If they're intended to be used together, consider normalizing their string format to avoid confusion.
54-64: Validate potential collisions due to timestamp-based keys

Similar to other models, the key generation includes a timestamp susceptible to collisions at a given resolution. If guaranteed uniqueness is required, consider combining the timestamp with a monotonic counter or other unique identifier (see the locked_block sketch above).

dan_layer/state_store_rocksdb/src/utils.rs (2)
39-42: Consider taking a reference to the byte array instead of by value

The `bincode_decode` function takes ownership of the input bytes vector, which could be inefficient for large payloads, as it requires copying the entire vector. Consider modifying it to take a reference (`&[u8]`) instead, similar to how `bor_decode` is implemented.

```diff
-pub fn bincode_decode<T: DeserializeOwned>(bytes: Vec<u8>) -> Result<T, RocksDbStorageError> {
-    let (value, _): (T, usize) = bincode::serde::decode_from_slice(&bytes, BINCODE_CONFIG)?;
+pub fn bincode_decode<T: DeserializeOwned>(bytes: &[u8]) -> Result<T, RocksDbStorageError> {
+    let (value, _): (T, usize) = bincode::serde::decode_from_slice(bytes, BINCODE_CONFIG)?;
     Ok(value)
 }
```
77-78: Fix typographical error in comment

There's a typo in the comment: "endcoding" should be "encoding".

```diff
-    // hexadecimal endcoding with full 0 padding, so the key preserves ordering
+    // hexadecimal encoding with full 0 padding, so the key preserves ordering
```

dan_layer/storage/src/consensus_models/transaction_pool.rs (1)
610-613: Redundant setter methods could cause confusion

The newly added `set_is_ready()` method duplicates functionality already provided by the existing `set_ready()` method. Having multiple methods that do the same thing can lead to inconsistent usage patterns and confusion for developers. Consider either:

- Deprecating one of the methods
- Removing the duplicate method
- Making one method call the other to maintain consistency

```diff
 pub fn set_is_ready(&mut self, is_ready: bool) -> &mut Self {
-    self.is_ready = is_ready;
-    self
+    // Delegate to existing method for consistency
+    self.set_ready(is_ready)
 }
```

Also applies to: 668-671
dan_layer/state_store_rocksdb/src/model/quorum_certificate.rs (1)
83-86: Strong coupling with QuorumCertificateModel

`BlockColumnFamily` directly depends on `QuorumCertificateModel::key_prefix()`, creating tight coupling between these classes. If the prefix for `QuorumCertificateModel` changes, it would affect `BlockColumnFamily` as well. Consider using a more decoupled approach, either by:

- Defining the prefix as a constant that both classes can reference
- Passing the prefix as a parameter
- Having `BlockColumnFamily` define its own independent prefix

```diff
 pub fn key_from_block_id(block_id: &BlockId) -> String {
-    format!("{}_{}", QuorumCertificateModel::key_prefix(), block_id)
+    // Option 1: Using a shared constant
+    format!("{}_{}", "quorumcertificates", block_id)
 }
```

dan_layer/state_store_rocksdb/src/model/pending_state_tree_diff.rs (1)
47-54: Consider distinct key formats for with/without height

The `key_from_block_str_and_height` method always includes a height component in the key format, using an empty string when no height is provided. This could make key parsing ambiguous. Consider using distinct key formats when height is present vs. absent:

```diff
 pub fn key_from_block_str_and_height(block_id: &str, height: Option<&NodeHeight>) -> String {
-    let height = height
-        .map(|h|
-            // RocksdbSeq allows us to properly order keys by height
-            RocksdbSeq(h.as_u64()).to_string())
-        .unwrap_or_default();
-    format!("{}_{}_{}", Self::key_prefix(), block_id, height)
+    match height {
+        Some(h) => {
+            // RocksdbSeq allows us to properly order keys by height
+            let height_str = RocksdbSeq(h.as_u64()).to_string();
+            format!("{}_{}_height_{}", Self::key_prefix(), block_id, height_str)
+        }
+        None => format!("{}_{}", Self::key_prefix(), block_id)
+    }
 }
```

dan_layer/state_store_rocksdb/src/model/block_diff.rs (2)
42-47: Consider renaming `load` to clarify its functionality

The method name `load` might be slightly ambiguous. A more descriptive name like `to_block_diff` or `into_block_diff` can improve readability, making it clear that this function constructs a `BlockDiff` from the provided data.
64-92: Potential key format collisions

Using underscores to separate fields in key strings can lead to ambiguous parsing if the inner fields happen to contain underscores themselves. Consider serializing each field (block_id, substate_id, created_at) in hex or a similarly disjoint encoding to avoid collisions. A possible snippet to ensure safe key encoding could be:

```diff
 fn key(value: &Self::Item) -> String {
-    format!("{}_{}_{}_{}", Self::key_prefix(), value.block_id, value.substate_id, value.created_at)
+    let block_id_hex = hex::encode(value.block_id.as_bytes());
+    let substate_id_hex = hex::encode(value.substate_id.to_bytes());
+    let timestamp = value.created_at; // or any string
+    format!("{}|{}|{}|{}", Self::key_prefix(), block_id_hex, substate_id_hex, timestamp)
 }
```

dan_layer/storage_tests/src/helper.rs (2)
68-78: Expand coverage in `create_tx_atom`

Currently, this function defaults all transactions to `Decision::Commit`. Varying the decision could improve test coverage of rollback or abort scenarios.
145-180: Consider replacing `eprintln!` with logger statements

`eprintln!` might be too noisy during tests. Logging can be toggled or captured more flexibly. Overall, the block creation logic is correct.

dan_layer/state_store_rocksdb/src/model/state_transition.rs (2)
76-89: Verify error handling for failures in `put_in_cfs`

Inside `put_in_cfs`, `ShardColumnFamily::put` is invoked, but errors from that call are surfaced as `RocksDbStorageError` without further context. Consider adding specific error tracing or contextual logs to assist in debugging DB write issues (e.g., which shard or ID caused the error).
91-101: Add consistency checks after CF deletion

Similar to `put_in_cfs`, the `delete_from_cfs` logic delegates to `ShardColumnFamily::delete`. If the CF or the key is missing, the code doesn't log or handle it specially. Consider verifying that the delete operation indeed succeeded (or that the key existed) to avoid silent failures.

dan_layer/state_store_rocksdb/src/model/lock_conflict.rs (2)
52-59: Validate key format for conflict data retrieval

The `format!("{}_{}_{}", Self::key_prefix(), conflict.transaction_id, created_at)` approach is readable, but it might lead to collisions if `transaction_id` or `created_at` are not unique enough. Confirm that these fields guarantee uniqueness or add additional context if needed.
65-78: Consider adding logging for conflict insertions

When conflicts are stored in the CF with `BlockIdColumnFamily::put`, it might be helpful to log the conflict details or the operation for auditing.

dan_layer/state_store_rocksdb/src/model/block.rs (2)
156-170: Validate partial updates to CFs

`put_in_cfs` writes to three CFs independently. If one CF operation fails, the others might succeed, leading to an inconsistent partial update. Consider a transactional approach or a rollback strategy if any single CF operation fails; a batching sketch follows.
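A minimal sketch of the atomic alternative using rust-rocksdb's `WriteBatch`, which applies all column-family puts in one write (paths, CF names, and values are illustrative; the store's transactional writer would need the equivalent on its own transaction type):

```rust
use rocksdb::{Options, WriteBatch, DB};

fn main() -> Result<(), rocksdb::Error> {
    let mut opts = Options::default();
    opts.create_if_missing(true);
    opts.create_missing_column_families(true);
    let db = DB::open_cf(&opts, "/tmp/blocks-batch-demo", ["blocks", "blocks_is_committed"])?;

    let blocks_cf = db.cf_handle("blocks").expect("cf exists");
    let committed_cf = db.cf_handle("blocks_is_committed").expect("cf exists");

    // Both puts are buffered and applied atomically by db.write:
    // either every CF sees the update or none does.
    let mut batch = WriteBatch::default();
    batch.put_cf(blocks_cf, b"blocks_abc", b"<encoded block>");
    batch.put_cf(committed_cf, b"blocks_is_committed_abc", b"1");
    db.write(batch)?;
    Ok(())
}
```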
240-278: Use consistent naming for "is_committed" vs. "is_commited"

Minor spelling mismatch in the comment "Column family for 'is_commited'" vs. the constant name "blocks_is_committed". Keeping them aligned promotes clarity and consistent references.

```diff
-    // Column family for "is_commited"
+    // Column family for "is_committed"
```

dan_layer/state_store_rocksdb/src/model/evicted_node.rs (2)
34-40: Typographical correction in field name

There's a small misspelling in the field name `eviction_commmited_in_epoch` (with three 'm's). Consider renaming this to `eviction_committed_in_epoch` for clarity and consistency.

```diff
-    pub eviction_commmited_in_epoch: Option<Epoch>,
+    pub eviction_committed_in_epoch: Option<Epoch>,
```
95-125: Inconsistent naming convention

The struct and field naming use "EvictionCommitted", but the data model struct has a field spelled "commmited". For overall consistency, consider aligning field names in `EvictedNodeData` with the `EvictionCommittedColumnFamily` struct, so the terminology "committed" is spelled consistently.

dan_layer/state_store_rocksdb/src/model/block_transaction_execution.rs (1)
71-77: Clarify key segments

The key format combines transaction ID, block ID, and the creation timestamp. Ensure that the ordering of these segments is correct for your intended prefix scanning strategy. Consider documenting in code comments how scanning or ordering is intended to work using these segments.

dan_layer/state_store_rocksdb/src/model/foreign_proposal.rs (1)
163-163: Typographical error in comment

The comment says "not comfirmed yet". Consider correcting it to "not confirmed yet".

```diff
-// CF to query proposals that are not comfirmed yet
+// CF to query proposals that are not confirmed yet
```

dan_layer/state_store_rocksdb/src/reader.rs (2)
205-237: Consider optimizing the iteration logic

Within `get_ranked_transaction_atom_updates`, the code iterates over all block IDs, retrieves records, and performs a second filter on the returned data. This can be expensive when the number of blocks grows large. To improve performance, consider adding indexes or employing range queries that limit the scanning range to only relevant items; a prefix-scan sketch follows.
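A minimal sketch of a range-limited scan using an iterator positioned at a key prefix, so only matching keys are visited (the key layout is illustrative):

```rust
use rocksdb::{Direction, IteratorMode, Options, DB};

fn main() -> Result<(), rocksdb::Error> {
    let mut opts = Options::default();
    opts.create_if_missing(true);
    let db = DB::open(&opts, "/tmp/prefix-scan-demo")?;
    db.put(b"blockrec_b1_s1", b"v1")?;
    db.put(b"blockrec_b1_s2", b"v2")?;
    db.put(b"blockrec_b2_s1", b"v3")?;

    let prefix: &[u8] = b"blockrec_b1_";
    // Seek directly to the prefix and stop as soon as keys leave it,
    // instead of scanning and filtering the whole keyspace in memory.
    for item in db.iterator(IteratorMode::From(prefix, Direction::Forward)) {
        let (key, value) = item?;
        if !key.starts_with(prefix) {
            break;
        }
        println!("{}: {} bytes", String::from_utf8_lossy(&key), value.len());
    }
    Ok(())
}
```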
883-894: Possible height limit misalignment in `blocks_get_parent_chain`

When collecting the parent blocks, this loop stops at genesis or after hitting the limit. If a chain is unexpectedly longer, it might terminate early without fully verifying the entire path. Ensure that the calling code can handle partial results, or consider adding an error if the full parent chain isn't found within the limit.

dan_layer/state_store_rocksdb/src/model/missing_transactions.rs (1)
33-38: Consider deriving additional traits for `MissingTransaction`

`#[derive(PartialEq, Eq, Hash)]` might be beneficial if you plan to manipulate these in sets or maps. This can simplify data handling, especially if deduplication or hashing is required in the future.

dan_layer/state_store_rocksdb/src/writer.rs (5)
145-151: Consider gracefully handling `None` on `self.transaction` instead of unwrapping

Currently, if methods are called after commit/rollback, `transaction.as_mut().unwrap()` will panic. Handling this more gracefully (e.g. returning an error) can improve robustness and prevent runtime panics; a minimal sketch follows.
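A minimal sketch of the error-returning alternative, with a stand-in transaction handle and an error shape loosely modeled on `StorageError::General` (all names here are illustrative):

```rust
#[derive(Debug)]
enum StorageError {
    General { details: String },
}

struct WriteTx {
    // Stand-in for the Option-wrapped RocksDB transaction held by the writer.
    transaction: Option<String>,
}

impl WriteTx {
    // Returns an error instead of transaction.as_mut().unwrap() panicking.
    fn tx_mut(&mut self) -> Result<&mut String, StorageError> {
        self.transaction.as_mut().ok_or(StorageError::General {
            details: "write transaction already committed or rolled back".into(),
        })
    }

    fn put(&mut self, data: &str) -> Result<(), StorageError> {
        self.tx_mut()?.push_str(data);
        Ok(())
    }
}

fn main() {
    let mut tx = WriteTx { transaction: None };
    // A use-after-commit becomes a recoverable error, not a panic.
    assert!(tx.put("x").is_err());
}
```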
274-285: Use RocksDB write batching for bulk insertions

A loop of single writes, as marked with the `TODO`, can be slow. Consider using a batch or merge operator in RocksDB for improved performance, especially for large sets of `SubstateChange`; a batching sketch follows.
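A minimal sketch of the batching idea: buffer the per-change puts in one `WriteBatch` and issue a single write, instead of one put per `SubstateChange` (key and value shapes are illustrative):

```rust
use rocksdb::{Options, WriteBatch, DB};

fn main() -> Result<(), rocksdb::Error> {
    let mut opts = Options::default();
    opts.create_if_missing(true);
    let db = DB::open(&opts, "/tmp/bulk-insert-demo")?;

    // Stand-ins for a large set of encoded substate changes.
    let changes: Vec<(String, Vec<u8>)> = (0..1000)
        .map(|i| (format!("substatechange_{i:08}"), vec![0u8; 32]))
        .collect();

    let mut batch = WriteBatch::default();
    for (key, value) in &changes {
        batch.put(key, value); // buffered in memory, not yet written
    }
    db.write(batch)?; // one atomic write for all entries
    Ok(())
}
```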
899-950: Potential large in-memory iteration when cleaning up old missing transactions

Lines 929-945 use `multi_get_cf` and filter in memory to delete entries older than the current height. If the dataset is large, this could become a bottleneck. A more selective approach (e.g. the prefix range deletion sketched below) may be more efficient.
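A minimal sketch of a prefix range deletion with `delete_range_cf`, removing every key below a height bound in one call (the key layout is illustrative; note that RocksDB transactions do not support range deletes, so this applies to the base DB rather than inside a transaction):

```rust
use rocksdb::{Options, DB};

// Smallest key strictly greater than every key sharing `prefix`
// (fine here because our ASCII prefixes never end in 0xff).
fn prefix_upper_bound(prefix: &[u8]) -> Vec<u8> {
    let mut end = prefix.to_vec();
    *end.last_mut().expect("non-empty prefix") += 1;
    end
}

fn main() -> Result<(), rocksdb::Error> {
    let mut opts = Options::default();
    opts.create_if_missing(true);
    opts.create_missing_column_families(true);
    let db = DB::open_cf(&opts, "/tmp/range-delete-demo", ["missing_tx"])?;
    let cf = db.cf_handle("missing_tx").expect("cf exists");

    db.put_cf(cf, b"missingtx_0001_a", b"")?;
    db.put_cf(cf, b"missingtx_0001_b", b"")?;

    // Delete all entries recorded at heights 0000..=0001 in one range call,
    // with no multi_get_cf plus in-memory filtering.
    let to = prefix_upper_bound(b"missingtx_0001_");
    db.delete_range_cf(cf, b"missingtx_0000_".to_vec(), to)?;
    Ok(())
}
```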
1341-1347: Reinstate the updated record without a full delete when setting `proposed_in_block`

Currently, the code deletes and re-inserts entries to clear old column-family records. A direct update via a merge operator or partial column overwrite could be more efficient and avoid ephemeral deletion states.
1423-1425: Unimplemented code for `diagnostics_add_no_vote`

This method has a `todo!()`. Confirm whether it will be used in future iterations or should be removed.

Do you want to open a new issue to implement diagnostics for no-vote scenarios?
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (89)
- Cargo.toml (3 hunks)
- applications/tari_dan_app_utilities/config_presets/c_validator_node.toml (1 hunks)
- applications/tari_validator_node/Cargo.toml (1 hunks)
- applications/tari_validator_node/src/bootstrap.rs (1 hunks)
- applications/tari_validator_node/src/config.rs (4 hunks)
- applications/tari_validator_node/src/consensus/spec.rs (1 hunks)
- applications/tari_validator_node/src/json_rpc/handlers.rs (1 hunks)
- dan_layer/state_store_rocksdb/Cargo.toml (1 hunks)
- dan_layer/state_store_rocksdb/src/error.rs (1 hunks)
- dan_layer/state_store_rocksdb/src/lib.rs (1 hunks)
- dan_layer/state_store_rocksdb/src/model/block.rs (1 hunks)
- dan_layer/state_store_rocksdb/src/model/block_diff.rs (1 hunks)
- dan_layer/state_store_rocksdb/src/model/block_transaction_execution.rs (1 hunks)
- dan_layer/state_store_rocksdb/src/model/burnt_utxo.rs (1 hunks)
- dan_layer/state_store_rocksdb/src/model/epoch_checkpoint.rs (1 hunks)
- dan_layer/state_store_rocksdb/src/model/evicted_node.rs (1 hunks)
- dan_layer/state_store_rocksdb/src/model/foreign_parked_blocks.rs (1 hunks)
- dan_layer/state_store_rocksdb/src/model/foreign_proposal.rs (1 hunks)
- dan_layer/state_store_rocksdb/src/model/foreign_receive_counter.rs (1 hunks)
- dan_layer/state_store_rocksdb/src/model/foreign_send_counter.rs (1 hunks)
- dan_layer/state_store_rocksdb/src/model/foreign_substate_pledge.rs (1 hunks)
- dan_layer/state_store_rocksdb/src/model/high_qc.rs (1 hunks)
- dan_layer/state_store_rocksdb/src/model/last_executed.rs (1 hunks)
- dan_layer/state_store_rocksdb/src/model/last_proposed.rs (1 hunks)
- dan_layer/state_store_rocksdb/src/model/last_sent_vote.rs (1 hunks)
- dan_layer/state_store_rocksdb/src/model/last_voted.rs (1 hunks)
- dan_layer/state_store_rocksdb/src/model/leaf_block.rs (1 hunks)
- dan_layer/state_store_rocksdb/src/model/lock_conflict.rs (1 hunks)
- dan_layer/state_store_rocksdb/src/model/locked_block.rs (1 hunks)
- dan_layer/state_store_rocksdb/src/model/missing_transactions.rs (1 hunks)
- dan_layer/state_store_rocksdb/src/model/mod.rs (1 hunks)
- dan_layer/state_store_rocksdb/src/model/parked_block.rs (1 hunks)
- dan_layer/state_store_rocksdb/src/model/pending_state_tree_diff.rs (1 hunks)
- dan_layer/state_store_rocksdb/src/model/quorum_certificate.rs (1 hunks)
- dan_layer/state_store_rocksdb/src/model/state_transition.rs (1 hunks)
- dan_layer/state_store_rocksdb/src/model/state_tree.rs (1 hunks)
- dan_layer/state_store_rocksdb/src/model/state_tree_shard_versions.rs (1 hunks)
- dan_layer/state_store_rocksdb/src/model/substate.rs (1 hunks)
- dan_layer/state_store_rocksdb/src/model/substate_locks.rs (1 hunks)
- dan_layer/state_store_rocksdb/src/model/traits.rs (1 hunks)
- dan_layer/state_store_rocksdb/src/model/transaction.rs (1 hunks)
- dan_layer/state_store_rocksdb/src/model/transaction_pool.rs (1 hunks)
- dan_layer/state_store_rocksdb/src/model/transaction_pool_state_update.rs (1 hunks)
- dan_layer/state_store_rocksdb/src/model/vote.rs (1 hunks)
- dan_layer/state_store_rocksdb/src/reader.rs (1 hunks)
- dan_layer/state_store_rocksdb/src/store.rs (1 hunks)
- dan_layer/state_store_rocksdb/src/utils.rs (1 hunks)
- dan_layer/state_store_rocksdb/src/writer.rs (1 hunks)
- dan_layer/state_store_sqlite/Cargo.toml (1 hunks)
- dan_layer/state_store_sqlite/tests/tests.rs (0 hunks)
- dan_layer/storage/src/consensus_models/block.rs (2 hunks)
- dan_layer/storage/src/consensus_models/block_pledges.rs (1 hunks)
- dan_layer/storage/src/consensus_models/epoch_checkpoint.rs (2 hunks)
- dan_layer/storage/src/consensus_models/foreign_parked_proposal.rs (2 hunks)
- dan_layer/storage/src/consensus_models/foreign_proposal.rs (1 hunks)
- dan_layer/storage/src/consensus_models/foreign_receive_counters.rs (1 hunks)
- dan_layer/storage/src/consensus_models/foreign_send_counters.rs (1 hunks)
- dan_layer/storage/src/consensus_models/high_qc.rs (2 hunks)
- dan_layer/storage/src/consensus_models/last_executed.rs (1 hunks)
- dan_layer/storage/src/consensus_models/last_proposed.rs (2 hunks)
- dan_layer/storage/src/consensus_models/last_sent_vote.rs (1 hunks)
- dan_layer/storage/src/consensus_models/last_voted.rs (1 hunks)
- dan_layer/storage/src/consensus_models/leaf_block.rs (2 hunks)
- dan_layer/storage/src/consensus_models/lock_confict.rs (1 hunks)
- dan_layer/storage/src/consensus_models/locked_block.rs (2 hunks)
- dan_layer/storage/src/consensus_models/quorum_certificate.rs (1 hunks)
- dan_layer/storage/src/consensus_models/state_transition.rs (2 hunks)
- dan_layer/storage/src/consensus_models/state_tree_diff.rs (2 hunks)
- dan_layer/storage/src/consensus_models/substate.rs (4 hunks)
- dan_layer/storage/src/consensus_models/substate_change.rs (1 hunks)
- dan_layer/storage/src/consensus_models/substate_lock.rs (1 hunks)
- dan_layer/storage/src/consensus_models/transaction.rs (2 hunks)
- dan_layer/storage/src/consensus_models/transaction_execution.rs (3 hunks)
- dan_layer/storage/src/consensus_models/transaction_pool.rs (5 hunks)
- dan_layer/storage/src/consensus_models/transaction_pool_status_update.rs (1 hunks)
- dan_layer/storage_tests/Cargo.toml (1 hunks)
- dan_layer/storage_tests/src/block_diffs.rs (1 hunks)
- dan_layer/storage_tests/src/blocks.rs (1 hunks)
- dan_layer/storage_tests/src/foreign_proposals.rs (1 hunks)
- dan_layer/storage_tests/src/foreign_substate_pledges.rs (1 hunks)
- dan_layer/storage_tests/src/helper.rs (1 hunks)
- dan_layer/storage_tests/src/lib.rs (1 hunks)
- dan_layer/storage_tests/src/misc.rs (1 hunks)
- dan_layer/storage_tests/src/missing_transactions.rs (1 hunks)
- dan_layer/storage_tests/src/pending_state_tree_diff.rs (1 hunks)
- dan_layer/storage_tests/src/quorum_certificates.rs (1 hunks)
- dan_layer/storage_tests/src/substates.rs (1 hunks)
- dan_layer/storage_tests/src/transactions.rs (1 hunks)
- dan_layer/storage_tests/src/votes.rs (1 hunks)
💤 Files with no reviewable changes (1)
- dan_layer/state_store_sqlite/tests/tests.rs
✅ Files skipped from review due to trivial changes (5)
- dan_layer/state_store_sqlite/Cargo.toml
- dan_layer/state_store_rocksdb/src/lib.rs
- applications/tari_validator_node/src/json_rpc/handlers.rs
- dan_layer/storage_tests/src/lib.rs
- dan_layer/storage_tests/Cargo.toml
⏰ Context from checks skipped due to timeout of 90000ms (5)
- GitHub Check: test
- GitHub Check: machete
- GitHub Check: check stable
- GitHub Check: check nightly
- GitHub Check: clippy
🔇 Additional comments (130)
applications/tari_validator_node/Cargo.toml (1)
44-44: LGTM! New dependency adds required state tree functionality

The addition of the `tari_state_tree` dependency aligns with the PR's objective to implement a new state store using RocksDB.

dan_layer/storage/src/consensus_models/foreign_proposal.rs (1)
169-169: LGTM! Added equality comparison traits

Adding `Eq` and `PartialEq` traits to `ForeignProposalStatus` is appropriate for enabling equality comparisons, which will likely be needed for the RocksDB implementation.

dan_layer/storage/src/consensus_models/locked_block.rs (1)
6-6: LGTM! Added serialization capabilities

Adding `Serialize` and `Deserialize` traits to the `LockedBlock` struct is consistent with the requirements for the new RocksDB state store implementation, enabling proper serialization and deserialization of the data.

Also applies to: 16-16
applications/tari_dan_app_utilities/config_presets/c_validator_node.toml (1)
27-28: Configuration for database type added

The new configuration option allows users to choose between RocksDB and SQLite databases, which aligns with the stated PR objective of introducing a new RocksDB implementation. Having this commented out by default ensures backward compatibility by defaulting to SQLite.
dan_layer/storage/src/consensus_models/foreign_parked_proposal.rs (2)
6-6: LGTM: Adding serialization imports

The addition of serde imports is necessary to enable serialization and deserialization of models for the RocksDB implementation.

16-16: Adding serialization traits to ForeignParkedProposal

Adding Serialize and Deserialize traits is appropriate for enabling the struct to be stored and retrieved from RocksDB.
dan_layer/storage/src/consensus_models/last_voted.rs (2)
4-4: LGTM: Adding serialization imports

The addition of serde imports is necessary to enable serialization and deserialization of models for the RocksDB implementation.

9-9: Adding serialization traits to LastVoted

Adding Serialize and Deserialize traits is appropriate for enabling the struct to be stored and retrieved from RocksDB.
dan_layer/storage/src/consensus_models/substate_lock.rs (2)
6-6: LGTM: Adding serialization imports

The addition of serde imports is necessary to enable serialization and deserialization of models for the RocksDB implementation.

13-13: Adding serialization traits to SubstateLock

Adding Serialize and Deserialize traits is appropriate for a struct that needs to be stored in RocksDB. This maintains consistency with the serialization pattern applied to other structs in the codebase.
dan_layer/storage/src/consensus_models/foreign_send_counters.rs (1)
6-6: Serialization support added correctly

The addition of serde's Serialize and Deserialize traits to the ForeignSendCounters struct is appropriate for the new RocksDB implementation, which will need to serialize/deserialize these models.
Also applies to: 12-12
dan_layer/storage/src/consensus_models/last_proposed.rs (1)
4-4: LGTM - Serialization traits properly added

The addition of Serialize and Deserialize traits aligns well with the PR objective of implementing a RocksDB-based state store. This will allow the LastProposed struct to be properly serialized when stored in RocksDB.
Also applies to: 14-14
dan_layer/storage/src/consensus_models/leaf_block.rs (1)
25-25: Appropriate serialization traits added

The Serialize and Deserialize traits have been correctly added to LeafBlock while preserving all existing derive traits (Debug, Clone, Copy, PartialEq, Eq). This ensures compatibility with the new RocksDB storage backend.
Also applies to: 35-35
dan_layer/storage/src/consensus_models/epoch_checkpoint.rs (1)
7-7: Serialization support added correctly

The addition of Serialize and Deserialize traits to the EpochCheckpoint struct is consistent with changes to other models and necessary for the RocksDB implementation. Consider whether any complex nested types within this struct (like IndexMap) might need custom serialization handling for optimal performance with RocksDB.

Would any fields in EpochCheckpoint benefit from custom serialization attributes? For example, if certain fields are large but rarely accessed, you might consider using `#[serde(skip_serializing_if = "Option::is_none")]` or similar optimizations; a small sketch follows.

Also applies to: 18-18
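A minimal sketch of that attribute on an illustrative struct (not the real `EpochCheckpoint` layout); note that with non-self-describing encodings such as bincode, skipping fields needs care, so this is shown with JSON:

```rust
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize, Debug)]
struct Checkpoint {
    epoch: u64,
    // Omitted from output when None; `default` restores it on deserialize.
    #[serde(default, skip_serializing_if = "Option::is_none")]
    large_debug_blob: Option<Vec<u8>>,
}

fn main() {
    let cp = Checkpoint { epoch: 7, large_debug_blob: None };
    assert_eq!(serde_json::to_string(&cp).unwrap(), r#"{"epoch":7}"#);
}
```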
dan_layer/storage/src/consensus_models/transaction.rs (2)
11-11: Good serialization update

Adding the Serialize import from serde is necessary for the struct serialization enhancement below.

48-48: LGTM: Enhanced TransactionRecord with serialization support

Adding Serialize capability to the TransactionRecord struct is a good improvement that aligns with the new RocksDB state store implementation. This will enable proper serialization for data interchange between components.
dan_layer/storage/src/consensus_models/state_tree_diff.rs (2)
10-10: Good addition of serialization dependencies

Adding the Serialize and Deserialize imports is necessary for the struct serialization enhancement below.

62-62: LGTM: Enhanced VersionedStateHashTreeDiff with serialization support

Adding Serialize and Deserialize traits to VersionedStateHashTreeDiff enables proper serialization for storage in RocksDB.
applications/tari_validator_node/src/consensus/spec.rs (1)
43-43: LGTM: Explicit type reference instead of using Self::Addr

Changing from `SqliteStateStore<Self::Addr>` to `SqliteStateStore<PeerAddress>` makes the type reference more explicit. Since `Self::Addr` is defined as `PeerAddress` on line 33, this is functionally equivalent but provides better clarity.

dan_layer/storage/src/consensus_models/foreign_receive_counters.rs (2)
6-6: Good addition of serialization dependencies

Adding the Serialize and Deserialize imports is necessary for the struct serialization enhancement below.

11-11: LGTM: Enhanced ForeignReceiveCounters with serialization support

Adding Serialize and Deserialize traits to ForeignReceiveCounters aligns with the broader effort to make consensus models serializable for the new RocksDB implementation.
dan_layer/storage/src/consensus_models/block_pledges.rs (1)
108-108: Adding serialization support for RocksDB integration looks good

Adding `Serialize` and `Deserialize` traits to `SubstatePledge` enables proper serialization for the new RocksDB state store implementation. This is consistent with the PR's objective and other model changes.

dan_layer/storage/src/consensus_models/last_executed.rs (1)
4-4: Appropriate serialization support added for RocksDB integration

Adding the serde import and deriving `Serialize` and `Deserialize` for the `LastExecuted` struct enables proper serialization for the RocksDB state store implementation.

Also applies to: 9-9
dan_layer/storage/src/consensus_models/substate_change.rs (1)
6-6: Serialization traits properly added for RocksDB compatibility

The addition of `Serialize` and `Deserialize` traits for the `SubstateChange` enum enables proper serialization for storage in RocksDB, aligning with the overall PR objective to implement a RocksDB-based state store.

Also applies to: 12-12
dan_layer/storage/src/consensus_models/last_sent_vote.rs (1)
4-4
: Serialization support correctly added for RocksDB state store. The addition of the serde import and Serialize/Deserialize derivation for the LastSentVote struct is consistent with the changes made to other models and supports the new RocksDB state store implementation.
Also applies to: 10-10
184-185
: Ensure thatsidechain_id
integrity is addressed.This initialization line can potentially cause errors later in
bootstrap_state
orcreate_registration_file
ifsidechain_id
is empty or invalid. Consider performing validation or safely logging any unexpected states before usage.dan_layer/storage/src/consensus_models/high_qc.rs (1)
25-25
: Validate deserialization inputs. Adding Deserialize here is fine, but ensure that external callers do not feed untrusted or malformed data to these structs. You may want to confirm that no downstream logic assumes sanitized/validated data.
Also applies to: 35-35
dan_layer/storage/src/consensus_models/transaction_pool_status_update.rs (1)
4-4
: Confirm that Deserialize usage is secure. The introduction of Serialize and Deserialize is typically safe, but confirm that external inputs won't lead to data integrity issues. Consider adding runtime checks if these structs are exposed to untrusted data streams.
Also applies to: 12-12
dan_layer/storage/src/consensus_models/lock_confict.rs (1)
4-4
: LGTM: Added serialization support for LockConflict
The addition of Serde's Serialize and Deserialize traits to the LockConflict struct is appropriate and aligns with the PR objective of implementing a new RocksDB state store. This change enables the struct to be properly serialized and deserialized for storage in RocksDB.
Also applies to: 8-8
dan_layer/state_store_rocksdb/src/error.rs (2)
27-50
: LGTM: Well-structured error enum with appropriate variants. The RocksDbStorageError enum is well-designed with comprehensive error variants and clear error messages for different failure scenarios. The use of thiserror for deriving the Error trait is a good practice.
79-83
: LGTM: Clean implementation of the IsNotFoundError trait. The implementation of IsNotFoundError for RocksDbStorageError is clean and correctly identifies when an error is a "not found" error using pattern matching.
27-41
: LGTM: Well-implemented RocksdbModel for LastSentVote
The implementation of RocksdbModel for LastSentVoteModel is clean and follows the pattern established for other models. The timestamp-based key generation approach makes sense for this use case where you only need to retrieve the most recent entry.
6-6
: LGTM: Added serialization support for transaction execution models. The addition of Serde's Serialize and Deserialize traits to both TransactionExecution and BlockTransactionExecution structs is appropriate and consistent with the other model changes in this PR. This enables proper serialization and deserialization for the new RocksDB state store.
Also applies to: 18-18, 117-117
dan_layer/storage_tests/src/foreign_proposals.rs (1)
22-24
: LGTM, but check if the foreign keys setting is necessary for RocksDB. The SQLite test disables foreign keys with db.foreign_keys_off().unwrap(), but the RocksDB test doesn't. This might be intentional if RocksDB handles constraints differently, but verify this is expected behavior.
10-10
: LGTM - Adding serialization support for RocksDB storage. Adding serde imports and derives for Serialize/Deserialize is appropriate for the RocksDB implementation, which needs to serialize/deserialize these structures.
15-15
: LGTM - StateTransition now properly serializable. Adding Serialize and Deserialize traits to StateTransition is appropriate for the RocksDB implementation.
45-45
: LGTM - StateTransitionId now properly serializable. Adding Serialize and Deserialize traits to StateTransitionId is appropriate for the RocksDB implementation.
Cargo.toml (3)
34-34
: New workspace member for RocksDB state store implementation. The addition of "dan_layer/state_store_rocksdb" to workspace members aligns with the PR objective of introducing a new crate for a RocksDB implementation of the consensus state store traits.
39-39
: New storage tests workspace member. Adding "dan_layer/storage_tests" as a workspace member is a good approach for providing common database operation tests that can be used by both SQLite and RocksDB implementations.
102-102
: New RocksDB state store dependency. This adds a workspace dependency for the new RocksDB implementation, making it available throughout the project.
dan_layer/storage/src/consensus_models/block.rs (3)
446-448
: New setter method for block_time. This adds the ability to modify the block_time field, which is needed for the RocksDB implementation.
486-488
: New setter method for is_justified flag. This setter allows modifying the is_justified field after block creation, which is necessary for the new RocksDB state store implementation.
490-492
: New setter method for is_committed flag. This setter allows modifying the is_committed field after block creation, which is necessary for the new RocksDB state store implementation.
applications/tari_validator_node/src/config.rs (5)
89-89
: New database_type configuration field. This adds the ability to configure which database backend to use, enhancing the configurability of the validator node.
141-141
: Default to SQLite for backward compatibility. Setting the default database type to SQLite maintains backward compatibility, which aligns with the PR objectives.
165-170
: New DatabaseType enum with SQLite and RocksDB variants. This enum provides a clean, type-safe way to represent the supported database types, with SQLite as the default.
172-182
: FromStr implementation for DatabaseType. The implementation correctly handles case-insensitive parsing and provides clear error messages for invalid inputs.
184-191
: Display implementation for DatabaseType. This implementation provides consistent lowercase string representations for the database types, useful for configuration file generation and logging.
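A minimal sketch of what such a config enum typically looks like (variant and type names follow the review notes but are assumptions, not the exact code under review):

use std::{fmt, str::FromStr};

#[derive(Debug, Clone, Copy, PartialEq, Eq, Default)]
pub enum DatabaseType {
    #[default]
    Sqlite,
    RocksDb,
}

impl FromStr for DatabaseType {
    type Err = String;

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        // Case-insensitive parsing with a clear error for unknown values.
        match s.to_ascii_lowercase().as_str() {
            "sqlite" => Ok(Self::Sqlite),
            "rocksdb" => Ok(Self::RocksDb),
            other => Err(format!("invalid database type: {other}")),
        }
    }
}

impl fmt::Display for DatabaseType {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        // Lowercase names so Display output can be parsed back by FromStr.
        match self {
            Self::Sqlite => write!(f, "sqlite"),
            Self::RocksDb => write!(f, "rocksdb"),
        }
    }
}

fn main() {
    let db: DatabaseType = "RocksDB".parse().unwrap();
    assert_eq!(db, DatabaseType::RocksDb);
    assert_eq!(db.to_string(), "rocksdb");
}

The property worth unit-testing here is the round trip: parse(display(x)) == x, since the config file generator writes these values back out.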
dan_layer/storage/src/consensus_models/substate.rs (5)
346-346
: Added serialization traits to SubstateCreatedProof. Adding Serialize and Deserialize traits is necessary for storing this data in RocksDB, which requires serialization.
352-352
: Added serialization traits to SubstateDestroyedProof. Adding Serialize and Deserialize traits is necessary for storing this data in RocksDB, which requires serialization.
366-366
: Added serialization traits to SubstateValueOrHash enum. Adding Serialize and Deserialize traits is necessary for storing this data in RocksDB, which requires serialization.
414-414
: Added serialization traits to SubstateData. Adding Serialize and Deserialize traits is necessary for storing this data in RocksDB, which requires serialization.
454-454
: Added serialization traits to SubstateUpdate enum. Adding Serialize and Deserialize traits is necessary for storing this data in RocksDB, which requires serialization.
dan_layer/state_store_rocksdb/src/model/foreign_parked_blocks.rs (1)
35-45
:✅ Verification successful
Verify consistent error handling.
The implementation doesn't include any error handling. If the RocksDB operations might fail (e.g., serialization issues), consider ensuring that appropriate error handling exists in the transaction implementation that uses this model.
🏁 Script executed:
#!/bin/bash
# Check the RocksDB transaction implementations for error handling
echo "Checking error handling in RocksDB transaction implementations:"
rg -A 5 "RocksDbStateStore.*Transaction" --glob "*.rs" || echo "No transaction implementations found"
echo "Checking for error types and handling:"
rg -A 2 "enum.*Error" --glob "*.rs" || echo "No specific error types found"
Length of output: 37380
Action: Error Handling in ForeignParkedBlockModel Confirmed
The key generation methods in the ForeignParkedBlockModel implementation are inherently infallible. Our inspection of the transaction implementations shows that any potential errors (e.g., serialization failures during RocksDB operations) are consistently handled at the transaction layer using the appropriate error types. No additional error handling is required directly within this model.
- The key_prefix and key functions do not perform fallible operations.
- Transaction implementations (both read and write) already propagate errors via StorageError and related error types.
- This design is consistent with the rest of the codebase.
dan_layer/storage_tests/src/substates.rs (1)
35-41
: Inconsistent test execution pattern noticed. The SQLite test is marked with #[ignore], which means it won't run by default during test execution, while the RocksDB test is enabled. This seems inconsistent with other test files in this PR where both database backends are enabled.
dan_layer/storage_tests/src/blocks.rs (1)
20-27
: Clarify reasoning for ignoring SQLite test despite foreign keys being disabled. There's a comment indicating that the test is ignored due to missing foreign key values, but in line 25 foreign keys are explicitly disabled with db.foreign_keys_off(). This seems contradictory: if foreign key constraints are disabled, why is the test still ignored?
dan_layer/state_store_rocksdb/src/model/state_tree_shard_versions.rs (3)
29-33
: LGTM! Clean data model design. The StateTreeShardVersionModelData struct is well-designed with appropriate fields and proper derives for serialization/deserialization.
37-41
: Good key generation approach. The key_from_shard method follows the established pattern of combining a prefix with the shard identifier.
43-53
: RocksdbModel implementation looks correct. The implementation provides all the necessary components: associated type definition, key prefix method, and key generation method based on the data model.
dan_layer/state_store_rocksdb/src/model/transaction.rs (3)
32-35
: Key generation approach is consistent. The key generation method follows the established pattern of concatenating the prefix with the transaction ID.
42-48
: Efficient batch retrieval with good error handling. The use of multi_get is efficient for batch operations, and the error handling appropriately maps RocksDB errors to the custom error type.
37-57
:✅ Verification successful
Verify that the decode method is accessible.
The
get_any
method referencesSelf::decode()
, but this method isn't defined in this file. Please ensure that this method is properly defined in theRocksdbModel
trait or elsewhere in the codebase.
🏁 Script executed:
#!/bin/bash
# Search for the decode method in RocksdbModel trait
echo "Searching for decode method in RocksdbModel trait"
rg "fn\s+decode" --type=rust -A 10 -B 3 dan_layer/state_store_rocksdb/src/model/traits.rs
# Look for other implementations using decode
echo "Looking for other implementations using decode"
rg "Self::decode" --type=rust dan_layer/state_store_rocksdb/src/
Length of output: 1409
Subject: Confirmation on decode method accessibility
The decode method is defined in the RocksdbModel trait (located in dan_layer/state_store_rocksdb/src/model/traits.rs) and is used consistently throughout the codebase, including in get_any in dan_layer/state_store_rocksdb/src/model/transaction.rs. No additional changes are required.
dan_layer/storage_tests/src/block_diffs.rs (3)
15-26
: Good test structure with implementation variants. The tests are well-structured, testing both SQLite and RocksDB implementations by reusing the same test logic, which ensures consistent behavior across database types.
31-43
: Good test setup and data preparation. The test properly sets up test data including block ID, substate ID, and version, then creates a valid SubstateChange for testing.
52-57
: Test properly verifies removal operation. The test correctly verifies that the block_diffs_remove operation removes all changes by checking that the count is zero after removal.
28-32
: Good data model structure. The ParkedBlockData struct clearly defines the relationship between a parked block and its associated foreign proposals.
46-56
: RocksdbModel implementation is proper and consistent. The implementation follows the established pattern and correctly uses the block ID for key generation.
dan_layer/state_store_rocksdb/src/model/leaf_block.rs (1)
60-62
:✅ Verification successful
The key generation could be vulnerable to collisions.
The key format depends on concatenating values with underscores. If any of these values can contain underscores themselves, it could lead to key collisions. Consider using a more robust separator or encoding method.
🏁 Script executed:
#!/bin/bash
# Check if the LeafBlock or Epoch types contain underscores in their string representations
# Look for LeafBlock definition and its fields
rg -A 10 "struct LeafBlock" --glob "*.rs"
# Look for Epoch definition
rg -A 5 "struct Epoch|type Epoch|enum Epoch" --glob "*.rs"
Length of output: 17322
Key Generation Collision Concern: No Action Required
After verifying the relevant types:
- The leaf_block.epoch is either an integer or wraps a numeric value (from Epoch), and its string conversion does not introduce underscores.
- The created_at field (of type RocksdbTimestamp) is also rendered as a plain numeric value.
- The Self::key_prefix() component is a controlled, likely constant string.
Based on these points, the use of underscores as delimiters does not pose a collision risk given the current implementations. No changes are necessary.
dan_layer/state_store_rocksdb/src/model/last_voted.rs (1)
70-83
:✅ Verification successful
Main value storage is missing in the put_in_cfs method.
The put_in_cfs method only stores data in column families but doesn't appear to store the main value itself. There's a comment indicating that each CF stores a key to the main collection, but the main value storage operation is missing. Ensure this is intentional and not an oversight.
method only stores data in column families but doesn't appear to store the main value itself. There's a comment indicating that each CF stores a key to the main collection, but the main value storage operation is missing. Ensure this is intentional and not an oversight.
🏁 Script executed:
#!/bin/bash # Check if the main value storage happens elsewhere, such as in a parent trait implementation # Look for the RocksdbModel trait definition rg -A 20 "trait RocksdbModel" --glob "*.rs" # Look for implementations of put_in_cfs in other models rg "fn put_in_cfs" --glob "*.rs"Length of output: 2872
Clarification on Main Value Storage in
put_in_cfs
After verifying across the codebase, it appears that the current implementation in
dan_layer/state_store_rocksdb/src/model/last_voted.rs
is consistent with the overall design. Theput_in_cfs
method is intended only to store—in the corresponding column family—the key (or pointer) to the main value stored elsewhere. Other model implementations usingput_in_cfs
follow the same pattern, confirming that main value storage is handled by a separate mechanism. Please ensure that this design decision is clearly documented, so it remains evident for future maintenance.dan_layer/state_store_rocksdb/src/model/last_proposed.rs (3)
39-46
: Consider stabilizing or deferring timestamp creation. Using RocksdbTimestamp::now() on each conversion from &LastProposed means each call produces a fresh timestamp, even if the underlying data hasn't changed. This could introduce inconsistencies or duplicate entries when repeatedly converting the same LastProposed.
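One hedged way to address this, as a self-contained sketch: take the timestamp as an explicit argument so conversion is deterministic. The types below are simplified stand-ins for the ones in the review, and the wrapper field is an assumption:

// Hypothetical stand-ins for the types discussed above.
#[derive(Clone)]
struct LastProposed { height: u64 }
#[derive(Clone, Copy, PartialEq)]
struct RocksdbTimestamp(u128);
struct LastProposedModelData { last_proposed: LastProposed, created_at: RocksdbTimestamp }

// The caller fixes the insertion moment once, instead of the conversion
// calling RocksdbTimestamp::now() on every invocation.
fn to_model(value: &LastProposed, created_at: RocksdbTimestamp) -> LastProposedModelData {
    LastProposedModelData { last_proposed: value.clone(), created_at }
}

fn main() {
    let v = LastProposed { height: 42 };
    let ts = RocksdbTimestamp(1_700_000_000_000);
    // Converting twice with the same supplied timestamp yields identical rows/keys.
    let a = to_model(&v, ts);
    let b = to_model(&v, ts);
    assert!(a.created_at == b.created_at && b.last_proposed.height == 42);
}

The existing From<&LastProposed> impl could then delegate to this constructor with RocksdbTimestamp::now(), so current call sites keep working.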
66-68
: Confirm single column family sufficiency. You're returning only TimestampColumnFamily from column_families(). Verify that all necessary indexing or lookups on this model can be accomplished via timestamps alone. Otherwise, consider adding additional column families to handle other query patterns.
70-83
: Validate fully storing data in main column family. The put_in_cfs method inserts an entry into the TimestampColumnFamily using the main key, but it does not appear to store the actual serialized data in another column family. Ensure that this logic is correct, or confirm whether the base trait or a parent method is handling main data storage.
28-33
: Possible duplication of creation time. Similar to other models, a new timestamp is generated every time this struct is created from ForeignReceiveCounters. Confirm that always updating the timestamp is desired, or consider persisting an external timestamp if you need a stable insertion moment.
44-56
: Validate indexing approach. The key_prefix set to "foreignreceivecounters" combined with the creation timestamp ensures ordered storage by creation time. Just confirm there are no future requirements to query by other fields (e.g. index by a specific counter ID). If so, additional column families may be required.
39-43
: SerDe mismatch workaround comment is laudable. Good documentation explaining the SerDe mismatch with ciborium and bincode. Keeping the relevant link ensures greater maintainability for future debugging.
259-262
: Clarify empty address usage. When address_opt is None, you concatenate an empty string. Validate that this behavior is intentional. Consider prefixing with a delimiter (e.g. "_none") to ensure there's no ambiguity when parsing or scanning keys.
34-39
: Well-structured data model. The ForeignSubstatePledgeData struct's fields are clearly defined and should work well with serialization. Good job keeping it straightforward.
66-69
: Ensure exhaustive testing of BOR encoding/decoding. BOR is used to avoid conflicts, and no immediate issues are evident. However, consider adding unit tests or property-based tests to verify the round-trip encoding/decoding of values (particularly SubstatePledge) to catch any edge cases.
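A hedged sketch of such a round-trip test, shown with ciborium (which the SerDe-mismatch comment references); the struct is a stand-in for SubstatePledge so the test is self-contained, and in the real crate you would call its own bor_encode/bor_decode helpers instead:

use serde::{Deserialize, Serialize};

// Hypothetical stand-in for SubstatePledge.
#[derive(Debug, Clone, PartialEq, Serialize, Deserialize)]
struct Pledge {
    substate_id: String,
    version: u32,
    is_write: bool,
}

#[test]
fn pledge_round_trips_through_cbor() {
    let original = Pledge { substate_id: "component_abc".into(), version: 7, is_write: true };

    let mut buf = Vec::new();
    ciborium::ser::into_writer(&original, &mut buf).expect("encode");
    let decoded: Pledge = ciborium::de::from_reader(buf.as_slice()).expect("decode");

    // The decoded value must be equivalent to the original.
    assert_eq!(original, decoded);
}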
66-80
: Confirm consistency of references in column families.Storing the main key in multiple column families is a flexible approach. However, ensure you have adequate tests covering insertion and retrieval from these column families, verifying consistency and avoidance of stale references after updates or rollbacks.
82-92
: Synchronize main record deletion with CF references.The
delete_from_cfs
implementation only removes references inBlockIdColumnFamily
andTransactionIdColumnFamily
. Confirm that the main key is also removed or handled elsewhere to keep data consistent and avoid orphaned records.dan_layer/state_store_rocksdb/src/model/transaction_pool.rs (2)
36-71
: Verify partial updates with test coverage.
try_convert
mergesTransactionPoolStateUpdateModelData
into aTransactionPoolRecord
. Ensure robust coverage to confirm that all edge cases—e.g., partially populated fields—are handled correctly.
32-34
: Neat key derivation approach.Using
transaction_id
as part of the key for items in the transaction pool is clear and provides an intuitive reference for lookups.dan_layer/state_store_rocksdb/src/model/locked_block.rs (1)
36-43
: Consider verifying creation time source.
The From<&LockedBlock> implementation sets created_at by calling RocksdbTimestamp::now(), which may not precisely reflect the block's original creation or locking time. If your design requires tracking the exact moment the block was locked, you might consider passing the relevant timestamp from upstream or adding an inline comment clarifying this choice.
494-496
: LGTM! Added getter for direct field access. The addition of the stage() getter method provides a clear way to access the stage field directly, which is a good practice for maintaining encapsulation.
615-623
: LGTM! Consistent setter methods for field manipulation. The addition of set_pending_stage() and set_stage() methods provides a consistent interface for manipulating these fields, which improves code readability and maintainability.
54-67
:✅ Verification successful
Verify main item storage is handled elsewhere
The put_in_cfs method only adds entries to the column families but doesn't appear to store the main item itself. Ensure this is intentional and that the main item storage is handled elsewhere in the codebase.
🏁 Script executed:
#!/bin/bash
# Check if main item storage is handled in RocksdbModel implementation or elsewhere
# Look for the pattern where put_in_cfs is called, to see if main storage happens there
rg -A 20 "pub fn put\(" dan_layer/state_store_rocksdb/src/model/traits.rs
# Look for any RocksdbModel implementation that might show the pattern
rg -A 5 "impl RocksdbModel" dan_layer/state_store_rocksdb/src/
Length of output: 16360
Main item storage is intentionally handled elsewhere
After verifying the RocksdbModel implementations across the repository (including the QuorumCertificateModel in the quorum_certificate.rs file), it's clear that the design splits responsibilities. The put_in_cfs method exclusively updates the column families with reference keys, while the main item storage is managed by the higher-level put() method defined in the RocksdbModel trait. No changes are needed here.
dan_layer/state_store_rocksdb/src/model/pending_state_tree_diff.rs (2)
29-35
: LGTM! Well-structured data model. The PendingStateTreeDiffData struct has a clear purpose with appropriate fields, and properly implements serialization/deserialization.
37-42
: LGTM! Clean conversion implementation. The From trait implementation for converting PendingStateTreeDiffData to consensus_models::PendingShardStateTreeDiff is straightforward and effective.
1-22
: License header is valid.
No issues found; the header matches standard open-source licensing practices.
23-31
: Imports overview.
All necessary dependencies for serialization, storage models, and utility functions are properly imported.
33-40
: Struct definition for BlockDiffData looks solid.
The exposed fields are appropriate for identifying a substate change associated with a given block.
49-63
: BlockDiffModel key prefix construction.
The methodsbuild_key_prefix_str
andbuild_key_prefix
appear consistent and straightforward. Ensure thatblock_id.to_string()
always produces a safely parseable format for external consumers.dan_layer/storage_tests/src/helper.rs (12)
1-22
: License header is valid.
No issues found; the header matches standard open-source licensing practices.
23-31
: Imports overview.
All required crates for cryptography, random generation, and custom domain types are present.
52-59
: create_rocksdb usage is straightforward.
Using a temporary directory for test databases is convenient and clean.
60-66
: create_sqlite usage for in-memory mode looks correct.
Foreign keys are turned off, which is acceptable for these test scenarios.
80-85
: create_random_substate_id approach.
Generates a random ComponentKey for test. The approach is adequate, though be mindful that large tests could create many unique entity IDs.
87-102
: build_substate_record correctness.
Fields are filled with defaults or the provided substate ID. This is suitable for unit testing.
104-116
: build_substate_value uses default addresses and rules.
Method appears correct for generating a default component substate.
118-122
: copy_fixed potential panic if slice size differs.
copy_from_slice will panic if the source is not exactly the same length. For robust tests, consider asserting the length before copying.
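A small self-contained sketch of the assertion being suggested (the helper name copy_fixed comes from this file; the body shown is illustrative, not the actual implementation):

/// Copy `src` into a fixed-size array, failing with a clear message instead
/// of a bare copy_from_slice panic when the lengths differ.
fn copy_fixed<const N: usize>(src: &[u8]) -> [u8; N] {
    assert_eq!(src.len(), N, "copy_fixed: expected {N} bytes, got {}", src.len());
    let mut out = [0u8; N];
    out.copy_from_slice(src);
    out
}

fn main() {
    let bytes = [1u8, 2, 3, 4];
    let fixed: [u8; 4] = copy_fixed(&bytes);
    assert_eq!(fixed, [1, 2, 3, 4]);
}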
124-127
: assert_eq_debug utility.
Neat approach for comparing by Debug format.
129-131
: create_random_block_id is straightforward.
Generates a unique block ID for testing. No issues.
133-136
: create_random_hash is standard.
Properly returns a random 256-bit hash from OS RNG.
138-143
: create_random_vn_signature uses random keys.
Correct usage of Schnorr signature generation.dan_layer/state_store_rocksdb/src/model/transaction_pool_state_update.rs (5)
1-22
: License header is valid.
No issues found; it meets open-source licensing requirements.
23-27
: Imports overview.
Imports for Decision, Evidence, and TransactionId are properly utilized.
29-42
: TransactionPoolStateUpdateModelData fields appear comprehensive.
Covers block metadata, fee details, readiness, local and remote decisions. This is likely sufficient for a broad range of transaction states.
44-61
: Descending key ordering via block_height_desc.
Subtracting the block height from u64::MAX is a valid approach to invert the sorting order in RocksDB prefix scans. Just be mindful that extremely large block heights near u64::MAX could produce small or zero remainders, but this is typically expected.
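A quick self-contained illustration of why the u64::MAX subtraction inverts lexicographic key order, assuming (as this only works if) the height is encoded zero-padded to a fixed width in the key:

fn block_height_desc(height: u64) -> String {
    // Fixed width so lexicographic (byte) order matches numeric order.
    format!("{:020}", u64::MAX - height)
}

fn main() {
    let mut keys: Vec<String> = [5u64, 100, 1].iter().map(|h| block_height_desc(*h)).collect();
    keys.sort(); // RocksDB iterates keys in this byte order
    // After sorting, the key for height 100 comes first: descending by height.
    assert_eq!(keys[0], block_height_desc(100));
    assert_eq!(keys[2], block_height_desc(1));
}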
63-73
: RocksdbModel trait usage.
Key prefix and the delegated key function match standard usage.
1-22
: License header is valid.
No issues found; standard open-source licensing text.
23-31
: Imports overview.
Imports for managing transaction DB, burnt UTXOs, and confidential addresses are appropriate.
32-38
: BurntUtxoModel definition.
Serves as a logical model wrapper for burnt UTXOs in RocksDB.
40-50
: RocksdbModel key construction for BurntUtxoModel.
Storing the commitment-based key is a simple approach. Confirm that commitment.to_string() is collision-free, typically guaranteed by domain constraints.
51-79
: Write and delete operations in column families.
The usage of main_key as a reference in the column family is consistent. No issues found with the logic for put_in_cfs and delete_from_cfs.
81-90
: ProposedInColumnFamily is well-scoped.
The constant UNPROPOSED_VALUE is a sensible placeholder; just ensure it doesn't collide with real block IDs if the system never uses "None" as a valid block ID.
92-106
: Key building approach in ProposedInColumnFamily.
Appropriately includes the block ID or "None" prefix. The approach is consistent with how the main key is declared.
dan_layer/state_store_rocksdb/src/model/state_transition.rs (2)
44-57
: Consider verifying correctness of data encoding.
While bor_encode might be necessary to handle incompatibilities, any misalignment in encoding/decoding can lead to subtle runtime errors. Please ensure there's thorough testing for this constructor, particularly verifying that encoded data matches the decoding logic in downstream usage.
103-123
: Shard-based key prefix generation is correct, but consider validating shard uniqueness.
The approach of combining shard + seq is straightforward, but if there's any possibility that sequences overlap or shards change over time, you might experience collisions. A quick uniqueness verification or test coverage could prevent subtle data inconsistencies.
dan_layer/state_store_rocksdb/src/model/lock_conflict.rs (2)
34-41
: Ensure concurrency scenarios are tested.
Lock conflicts typically arise in high-concurrency situations. Make sure you have tests or plans to simulate concurrent transactions to confirm LockConflictData fields (e.g. depends_on_tx, created_at) are properly set and handled.
80-88
: Watch for silent deletions in delete_from_cfs.
Similar to insertion, if the item doesn't exist or the CF handle fails, there's no logging or additional feedback. Depending on auditing needs, you might detect or handle such silent failures more explicitly.
dan_layer/state_store_rocksdb/src/model/block.rs (1)
108-146
: Handle missing CF in get_cf with better error feedback.
Currently, a missing CF handle triggers an unwrap() or leads to NotFound. Distinguishing between a truly missing CF vs. an empty CF can help debugging. Additionally, the final block retrieval also uses unwrap for the BlockId. Consider capturing these errors carefully.
50-62
: Key Construction Implementation
The key_prefix_by_transaction_and_block method constructs a key prefix that includes optional block ID data. This design is clear and flexible for partial indexing. Ensure you've tested lookups with and without a block ID to avoid collisions or search ambiguities.
1-22
: License Header Check
This file includes a standard open-source license header. The format and content appear consistent with the rest of the project. No changes needed.
63-80
: Atomicity of Multi-CF Operations
You're updating multiple column families within a single transaction. This is good for ensuring atomic updates. Verify that the transaction is rolled back correctly if any CF operation fails, which will prevent partial data corruption across column families.dan_layer/state_store_rocksdb/src/reader.rs (2)
239-258
: Verify ancestor relationship before traversing back to genesis.
If start_block is not in the same chain as end_block, this loop continues until genesis. You might want to verify that start_block is indeed an ancestor, or handle the case where the chain diverges.
Would you like a script to confirm in the codebase that start_block is always an ancestor of end_block before calling this function?
617-627
: Check block ancestry for consistency.
In transaction_executions_get_pending_for_block, you guard against from_block_id not existing, but there is no explicit check for whether commit_block_id is truly an ancestor of from_block_id. Consider gracefully handling or logging the scenario where from_block_id does not descend from the commit block.
dan_layer/state_store_rocksdb/src/model/mod.rs (1)
23-56
: Overall module structure looks good.
The organized separation of concerns into multiple submodules is clear and follows typical Rust practices.dan_layer/state_store_rocksdb/src/writer.rs (3)
215-225
: Verify time unit consistency when calculating block_time.
now is measured in milliseconds, but block.timestamp() may be in a different unit (e.g. seconds), potentially causing an incorrect or negative subtraction result. Confirm that both values share the same time granularity.
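A hedged sketch of the normalization being asked for; the unit choices here are assumptions, and the point is simply to compare like with like and saturate rather than underflow:

fn block_time_ms(now_ms: u64, block_timestamp_secs: u64) -> u64 {
    // Normalize the block timestamp to milliseconds before subtracting,
    // and saturate so clock skew can never cause an underflow panic.
    let block_ms = block_timestamp_secs.saturating_mul(1000);
    now_ms.saturating_sub(block_ms)
}

fn main() {
    // now = 2_000_500 ms, block stamped at 2_000 s => 500 ms block time.
    assert_eq!(block_time_ms(2_000_500, 2_000), 500);
    // Skewed clock: the block appears to be in the future; saturates to 0.
    assert_eq!(block_time_ms(1_000, 2_000), 0);
}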
620-628
: Clarify block existence checks for concurrency edge cases.
self.blocks_exists(&block_id)? helps ensure a referenced block is present. Consider whether parallel writes/reads could invalidate this assumption and if additional synchronization or repeated checks are needed.
helps ensure a referenced block is present. Consider whether parallel writes/reads could invalidate this assumption and if additional synchronization or repeated checks are needed.
1530-1535
: Warn level for uncommitted transaction drop.
Logging a warning is helpful, but consider if an error or panic might be more appropriate in a production scenario where an uncommitted transaction implies a severe usage error.
pub trait ModelColumnFamily {
    type Item: Serialize;

    fn name() -> &'static str;

    fn build_key(value: &Self::Item) -> String;

    fn put(
        db: Arc<TransactionDB>,
        tx: &mut Transaction<'_, TransactionDB>,
        operation: &'static str,
        item: &Self::Item,
        value: &[u8],
    ) -> Result<(), RocksDbStorageError> {
        let key = Self::build_key(item);
        let cf = db.cf_handle(Self::name()).unwrap();
        tx.put_cf(cf, key, value)
            .map_err(|e| RocksDbStorageError::RocksDbError { operation, source: e })?;

        Ok(())
    }

    fn delete(
        db: Arc<TransactionDB>,
        tx: &Transaction<'_, TransactionDB>,
        operation: &'static str,
        item: &Self::Item,
    ) -> Result<(), RocksDbStorageError> {
        let key = Self::build_key(item);
        let cf = db.cf_handle(Self::name()).unwrap();
        tx.delete_cf(cf, key)
            .map_err(|e| RocksDbStorageError::RocksDbError { operation, source: e })?;

        Ok(())
    }
}
🛠️ Refactor suggestion
Avoid panics when retrieving column families.
Calling db.cf_handle(Self::name()).unwrap()
can panic if the column family is missing or misnamed. Instead, consider gracefully handling this error by returning a custom RocksDbStorageError
to avoid unexpected runtime failures.
- let cf = db.cf_handle(Self::name()).unwrap();
+ let cf = db.cf_handle(Self::name())
+ .ok_or_else(|| RocksDbStorageError::ColumnFamilyNotFound {
+ name: Self::name(),
+ operation,
+ })?;
fn key(_value: &Self::Item) -> String {
    // values don't need to be fetched by any specific field, we only want to fetch the last one
    let timestamp = RocksdbTimestamp::now();
    format!("{}_{}", Self::key_prefix(), timestamp)
}
🛠️ Refactor suggestion
Consider a more efficient key storage strategy for LastExecuted.
The current implementation uses timestamps to generate unique keys for each LastExecuted record, but comments indicate only the most recent record is needed. This could lead to:
- Accumulation of unused historical records
- Multiple records with the same timestamp in high-frequency environments
Consider using a single fixed key to always store the latest value, which would be more efficient for both storage and retrieval.
- fn key(_value: &Self::Item) -> String {
- // values don't need to be fetched by any specific field, we only want to fetch the last one
- let timestamp = RocksdbTimestamp::now();
- format!("{}_{}", Self::key_prefix(), timestamp)
- }
+ fn key(_value: &Self::Item) -> String {
+ // Use a fixed key since we only care about the latest value
+ format!("{}_latest", Self::key_prefix())
+ }
# TODO: use the next release (the one after 0.22.0) when available
# https://github.com/rust-rocksdb/rust-rocksdb/issues/881
rocksdb = { git = "https://github.com/rust-rocksdb/rust-rocksdb", rev = "1cf906dc4087f06631820f13855e6b27bd21b972" }
💡 Verification agent
🧩 Analysis chain
RocksDB dependency with specific commit reference.
The comment explains why a specific git revision is being used instead of a release version, which is good practice. Consider periodically checking for the next release to replace this pinned revision.
🌐 Web query:
What is the current latest release version of rust-rocksdb?
💡 Result:
The latest release version of rust-rocksdb
as of February 2025 is v0.36.0, released on January 3, 2025[10]. This information comes directly from the crates.io registry entry for the package[10], which serves as Rust's official package repository.
Key details about recent releases:
- v0.23.0 (December 24, 2024) introduced RocksDB 9.0 support and iOS build improvements[6]
- v0.21.0 (May 9, 2023) added closure-based comparators and RocksDB 8.x support[2][5]
- v0.19.0 (August 5, 2022) first introduced
io_uring
support and transaction handling[1][4]
The project maintains regular releases approximately every 80 days on average[4], with significant RocksDB engine updates typically aligned with upstream releases from Meta (Facebook). For production use cases requiring LSM-tree databases in Rust environments, v0.36.x represents the current stable branch with modern RocksDB integrations[8][10].
Citations:
- 1: https://rust.libhunt.com/rust-rocksdb-latest-version
- 2: https://github.com/rust-rocksdb/rust-rocksdb/blob/master/CHANGELOG.md
- 3: https://doc.rust-lang.org/beta/releases.html
- 4: https://rust.libhunt.com/rust-rocksdb-changelog
- 5: https://docs.rs/crate/rocksdb/latest/source/CHANGELOG.md
- 6: https://github.com/rust-rocksdb/rust-rocksdb/releases
- 7: https://crates.io/crates/rust-rocksdb/versions
- 8: https://github.com/rust-rocksdb/rust-rocksdb
- 9: https://sourceforge.net/projects/rust-rocksdb.mirror/files/v0.22.0/
- 10: https://crates.io/crates/rust-rocksdb
Action Required: Update RocksDB Dependency
The Cargo.toml currently pins rust-rocksdb
to a specific commit (rev = "1cf906dc4087f06631820f13855e6b27bd21b972"
) with a TODO note to switch once a later release than 0.22.0 is available. Since the latest release is now v0.36.0 (released January 2025), please consider updating the dependency.
- Replace the git dependency with the official release version if possible:
rocksdb = "0.36.0"
- Verify that the upgrade to v0.36.0 is compatible with your existing functionality before merging the change.
pub fn key_from_block_and_sender(block_id: &BlockId, sender_leaf_hash_opt: Option<&FixedHash>) -> String {
    let sender_leaf_hash = sender_leaf_hash_opt.map(|h| h.to_string()).unwrap_or_default();
    format!("{}_{}_{}", Self::key_prefix(), block_id, sender_leaf_hash)
}
💡 Verification agent
🧩 Analysis chain
Consider additional validation for key formatting.
The key format relies on string concatenation with underscores as separators. If either the block_id or sender_leaf_hash contains underscores, this could lead to ambiguous keys. Consider adding validation or using a more robust serialization method for key generation.
🏁 Script executed:
#!/bin/bash
# Check if BlockId or FixedHash implementations could produce strings with underscores
echo "Checking BlockId implementation:"
rg -A 5 "impl.*BlockId" || echo "No BlockId implementation found"
echo "Checking FixedHash representation:"
rg -A 5 "impl.*Display.*for.*FixedHash" || echo "No FixedHash Display implementation found"
echo "Checking ToString implementation:"
rg -A 5 "impl.*ToString.*for.*FixedHash" || echo "No explicit ToString implementation found"
Length of output: 7210
Key Generation: Address Potential Ambiguity with Underscore Delimiters
- In dan_layer/state_store_rocksdb/src/model/vote.rs (lines 31–33), the key is constructed by concatenating the key prefix, block_id, and an optional sender leaf hash using underscores.
- Verification confirms that neither the Display implementation for BlockId (which defers to its inner value) nor any explicit Display/ToString implementation for FixedHash handles underscore escaping.
- This leaves open the possibility that if either block_id or sender_leaf_hash contains underscores, the resulting key could become ambiguous.
- Recommendation: Add validation to ensure these string representations do not include underscores, or adopt a more robust, unambiguous serialization approach for key generation.
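A hedged sketch of one unambiguous alternative: length-prefix each component, so no separator byte inside a component can create a key that parses two different ways. The component values below are arbitrary stand-ins for the prefix, block ID, and sender hash:

/// Join key components as `<len>:<bytes>` segments; because each length is
/// explicit, an underscore (or any byte) inside a component cannot collide.
fn build_key(components: &[&[u8]]) -> Vec<u8> {
    let mut key = Vec::new();
    for c in components {
        key.extend_from_slice(c.len().to_string().as_bytes());
        key.push(b':');
        key.extend_from_slice(c);
    }
    key
}

fn main() {
    // These would collide under naive underscore joining ("a_b"+"c" vs "a"+"b_c"),
    // but length-prefixing keeps them distinct.
    let k1 = build_key(&[b"a_b", b"c"]);
    let k2 = build_key(&[b"a", b"b_c"]);
    assert_ne!(k1, k2);
}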
// last executed
let mut last_exec = LastExecuted {
    block_id: BlockId::genesis(),
    height: NodeHeight(123),
    epoch: Epoch::zero(),
};
tx.last_executed_set(&last_exec).unwrap();
let res = tx.last_executed_get().unwrap();
assert_eq_debug(&res, &last_exec);

last_exec.epoch += Epoch(1);

tx.last_executed_set(&last_exec).unwrap();
let res = tx.last_executed_get().unwrap();
assert_eq_debug(&res, &last_exec);
Duplicate test for LastExecuted.
This block of code is a duplicate of the previous LastExecuted test (lines 89-104). It appears that you've unintentionally duplicated the same test case.
- // last executed
- let mut last_exec = LastExecuted {
- block_id: BlockId::genesis(),
- height: NodeHeight(123),
- epoch: Epoch::zero(),
- };
- tx.last_executed_set(&last_exec).unwrap();
- let res = tx.last_executed_get().unwrap();
- assert_eq_debug(&res, &last_exec);
-
- last_exec.epoch += Epoch(1);
-
- tx.last_executed_set(&last_exec).unwrap();
- let res = tx.last_executed_get().unwrap();
- assert_eq_debug(&res, &last_exec);
fn build_key(value: &Self::Item) -> String {
    let address = value.to_substate_address();
    Self::build_key_by_transaction(&value.created_by_transaction, Some(&address))
}
Likely incorrect reference to created-by transaction.
In DestroyedByTxColumnFamily::build_key
, the code uses value.created_by_transaction
instead of a destroyed_by_transaction
. If a substate is truly destroyed, this index should reference the destroying transaction, not the transaction that created it. This might result in data misclassification or inaccurate queries.
fn build_key(value: &Self::Item) -> String {
let address = value.to_substate_address();
- Self::build_key_by_transaction(&value.created_by_transaction, Some(&address))
+ if let Some(SubstateDestroyed { destroyed_by_tx, .. }) = &value.destroyed {
+ Self::build_key_by_transaction(destroyed_by_tx, Some(&address))
+ } else {
+ // handle the case where the substate is not destroyed
+ String::new()
+ }
}
pub fn now() -> Self {
    Self(SystemTime::now().duration_since(UNIX_EPOCH).unwrap().as_millis())
}
🛠️ Refactor suggestion
Handle potential panic in timestamp calculation
The now()
method uses unwrap()
which will panic if the system time is before the UNIX epoch (which can happen due to system clock synchronization issues). Consider handling this error case gracefully.
pub fn now() -> Self {
- Self(SystemTime::now().duration_since(UNIX_EPOCH).unwrap().as_millis())
+ Self(
+ SystemTime::now()
+ .duration_since(UNIX_EPOCH)
+ .unwrap_or_else(|_| std::time::Duration::from_secs(0))
+ .as_millis()
+ )
}
pub fn multi_get_ids_by_cf_range(
    db: Arc<TransactionDB>,
    tx: &Transaction<'_, TransactionDB>,
    _operation: &'static str,
    cf: &str,
    lower_prefix: &str,
    upper_prefix: &str,
) -> Result<Vec<BlockId>, RocksDbStorageError> {
    let mut options = rocksdb::ReadOptions::default();
    options.set_iterate_lower_bound(lower_prefix.as_bytes());
    options.set_iterate_upper_bound(upper_prefix.as_bytes());

    let cf = db.cf_handle(cf).unwrap();

    let iterator = tx.iterator_cf_opt(cf, options, rocksdb::IteratorMode::Start);
    let values = iterator
        .map(|item| {
            // TODO: properly handle errors and avoid unwraps
            let (_, value) = item.unwrap();
            BlockId::try_from(value.to_vec()).unwrap()
        })
        .collect();

    Ok(values)
}
🛠️ Refactor suggestion
Ensure range boundaries are correct and handle iteration errors.
Similarly, multi_get_ids_by_cf_range
uses unwrap()
(line 88) in the iterator. If upper_prefix
is malformed or RocksDB has a read error, the iterator will panic. Graceful error handling is recommended.
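A hedged sketch of the collect-into-Result pattern that removes the mid-iteration panics. The error type is simplified to String so the snippet is self-contained; in the real code it would be RocksDbStorageError, and the item type mirrors the (key, value) pairs a RocksDB iterator yields:

fn collect_ids<I>(iter: I) -> Result<Vec<Vec<u8>>, String>
where
    I: Iterator<Item = Result<(Box<[u8]>, Box<[u8]>), String>>,
{
    iter.map(|item| {
        // Propagate the per-item error instead of unwrapping mid-iteration.
        let (_key, value) = item?;
        Ok(value.into_vec())
    })
    .collect() // Result<Vec<_>, _>: stops at the first error
}

fn main() {
    let ok: Vec<Result<(Box<[u8]>, Box<[u8]>), String>> =
        vec![Ok((Box::from(&b"k"[..]), Box::from(&b"v"[..])))];
    assert!(collect_ids(ok.into_iter()).is_ok());

    let bad = vec![Err("read error".to_string())];
    assert!(collect_ids(bad.into_iter()).is_err());
}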
db: Arc<TransactionDB>,
tx: &Transaction<'_, TransactionDB>,
_operation: &'static str,
cf: &str,
prefix: &str,
) -> Result<Vec<BlockId>, RocksDbStorageError> {
    let mut options = rocksdb::ReadOptions::default();
    options.set_iterate_range(rocksdb::PrefixRange(prefix.as_bytes()));

    let cf = db.cf_handle(cf).unwrap();

    let iterator = tx.iterator_cf_opt(cf, options, rocksdb::IteratorMode::Start);
    let values = iterator
        .map(|item| {
            // TODO: properly handle errors and avoid unwraps
            let (_, value) = item.unwrap();
            BlockId::try_from(value.to_vec()).unwrap()
        })
        .collect();

    Ok(values)
}
Avoid unwrap()
to prevent runtime panics.
In multi_get_ids_by_cf
, the iterator calls item.unwrap()
(line 62). This can panic if RocksDB produces an error mid-iteration. Consider handling errors gracefully via logging or returning a Result
to avoid a process-wide crash.
- let (_, value) = item.unwrap();
+ let (_, value) = item.map_err(|e| RocksDbStorageError::RocksDbError {
+     operation: _operation,
+     source: e,
+ })?;
With the closure now returning a Result, the surrounding collect becomes .collect::<Result<Vec<_>, _>>()? so the first error is propagated instead of panicking.
pub(crate) fn rollback(self) -> Result<(), RocksDbStorageError> {
    self.tx.rollback().map_err(|source| RocksDbStorageError::RocksDbError {
        source,
        operation: "commit",
    })?;
Correct the rollback operation label.
The error context currently states "commit"
instead of "rollback"
. This may confuse debugging efforts.
- operation: "commit",
+ operation: "rollback",
Actionable comments posted: 1
🧹 Nitpick comments (5)
dan_layer/storage_tests/src/misc.rs (2)
53-54
: Consider breaking down the large test function. While #[allow(clippy::too_many_lines)] is acceptable for test code, consider whether this function could be broken down into smaller, more focused test functions. This would improve maintainability and make it easier to identify the cause of test failures.
206-220
: Consider adding more comprehensive tests for foreign parked blocks. The current test only verifies the existence of a foreign parked block after insertion. Consider adding more comprehensive tests that verify retrieval of the block content and testing edge cases like updating or deleting foreign parked blocks.
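A hedged sketch of the fuller lifecycle shape being suggested. The store here is an in-memory stand-in purely to make the shape concrete and runnable; the real test would drive the StateStore write transaction with the actual foreign-parked-block methods:

use std::collections::HashMap;

// In-memory stand-in for the parked-blocks table.
#[derive(Default)]
struct ParkedBlocks(HashMap<u64, String>);

impl ParkedBlocks {
    fn insert(&mut self, id: u64, body: &str) { self.0.insert(id, body.to_string()); }
    fn exists(&self, id: u64) -> bool { self.0.contains_key(&id) }
    fn get(&self, id: u64) -> Option<&String> { self.0.get(&id) }
    fn remove(&mut self, id: u64) { self.0.remove(&id); }
}

fn main() {
    let mut store = ParkedBlocks::default();
    store.insert(1, "foreign parked block");
    assert!(store.exists(1));
    // Verify the content round-trips, not just existence.
    assert_eq!(store.get(1).unwrap(), "foreign parked block");
    // And that removal actually deletes the record.
    store.remove(1);
    assert!(!store.exists(1));
}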
dan_layer/storage_tests/src/foreign_proposals.rs (1)
1-2
: Update copyright year. The copyright year is set to 2025, which appears to be incorrect as we're currently in 2024.
-// Copyright 2025 The Tari Project
+// Copyright 2024 The Tari Project
dan_layer/storage_tests/src/foreign_substate_pledges.rs (2)
1-2
: Update copyright year. The copyright year is set to 2025, which appears to be incorrect as we're currently in 2024.
-// Copyright 2025 The Tari Project
+// Copyright 2024 The Tari Project
6-12
: Consider a more descriptive module name. The module name last_inserted seems disconnected from the functionality being tested, which is focused on foreign substate pledges. A more descriptive name like foreign_substate_pledges_test would better reflect the module's purpose.
-mod last_inserted {
+mod foreign_substate_pledges_test {
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
dan_layer/storage_tests/src/foreign_proposals.rs (1 hunks)
dan_layer/storage_tests/src/foreign_substate_pledges.rs (1 hunks)
dan_layer/storage_tests/src/misc.rs (1 hunks)
dan_layer/storage_tests/src/pending_state_tree_diff.rs (1 hunks)
dan_layer/storage_tests/src/substates.rs (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- dan_layer/storage_tests/src/pending_state_tree_diff.rs
⏰ Context from checks skipped due to timeout of 90000ms (4)
- GitHub Check: test
- GitHub Check: check nightly
- GitHub Check: check stable
- GitHub Check: clippy
🔇 Additional comments (13)
dan_layer/storage_tests/src/misc.rs (4)
38-39
: Test functions are marked with #[ignore]
Both miscellaneous_sqlite and miscellaneous_rocksdb test functions are marked with #[ignore], which means they won't be run as part of the normal test suite. Was this intentional? If these tests should be run regularly, consider removing the #[ignore] attribute.
Also applies to: 46-47
107-122
: Duplicate test for LastExecuted. This block of code is a duplicate of the previous LastExecuted test (lines 91-104). It appears that you've unintentionally duplicated the same test case.
- // last executed
- let mut last_exec = LastExecuted {
-     block_id: BlockId::genesis(),
-     height: NodeHeight(123),
-     epoch: Epoch::zero(),
- };
- tx.last_executed_set(&last_exec).unwrap();
- let res = tx.last_executed_get().unwrap();
- assert_eq_debug(&res, &last_exec);
-
- last_exec.epoch += Epoch(1);
-
- tx.last_executed_set(&last_exec).unwrap();
- let res = tx.last_executed_get().unwrap();
- assert_eq_debug(&res, &last_exec);
221-233
: Good test for state tree node staleness. This test effectively verifies both the insertion of state tree nodes and the handling of stale nodes. The verification that a stale node cannot be retrieved is particularly important for ensuring data integrity.
234-234
: Proper transaction management. Good practice to roll back the transaction at the end of the test to ensure that the database state remains unchanged, which helps maintain test isolation.
dan_layer/storage_tests/src/foreign_proposals.rs (3)
67-72
: Future implementation noted. The TODO comment indicates that some functionality is not yet implemented. This is well-documented and the test code is appropriately commented out.
73-77
: Future implementation noted. The TODO comment indicates that some functionality is not yet implemented. This is well-documented and the test code is appropriately commented out.
34-97
: Good comprehensive test coverage. The function tests a complete lifecycle of a foreign proposal, including creation, retrieval, status updates, and deletion. The transaction is properly rolled back at the end, which is good practice.
dan_layer/storage_tests/src/substates.rs (3)
`10-23`: Well-implemented utility function.

The `random_substate_id()` function is well-written and provides a good utility for generating random substate IDs for testing purposes.

`50-165`: Comprehensive test coverage for substate operations.

The `basic_substate_operations` function thoroughly tests various aspects of substate management, including creation, retrieval with different conditions, checking existence, and handling state transitions. The transaction is properly rolled back at the end.

`35-48`: ❓ Verification inconclusive

Both tests are marked with `#[ignore]`.

Both the SQLite and RocksDB tests are marked with `#[ignore]`, which means they won't be run by default. Ensure this is intentional, possibly due to them being time-consuming or requiring special setup.
🏁 Script executed:
```shell
#!/bin/bash
# Let's check if there's a pattern of ignoring both SQLite and RocksDB tests
echo "Checking pattern of ignoring both SQLite and RocksDB tests:"
rg "#\[ignore\].*basic_substate_operations_(sqlite|rocksdb)" "dan_layer/storage_tests/src/*.rs"
```

Length of output: 299
Below is a revised review comment. I suggest manually verifying that the tests in `dan_layer/storage_tests/src/substates.rs` are intentionally ignored, since our initial search script didn't yield results:

Attention: Verify That Ignored Tests Are Intentional

Both the SQLite and RocksDB tests in `dan_layer/storage_tests/src/substates.rs` are marked with `#[ignore]`. Please confirm this is intentional (for example, they might be temporarily disabled due to long runtime or special environment requirements). If these tests are meant to be skipped under normal circumstances, consider documenting the rationale.

Locations to check:

- `dan_layer/storage_tests/src/substates.rs` (ensure the `#[ignore]` annotations are deliberate)

If these tests should instead run by default or the ignore annotations were left by mistake, please adjust accordingly.
dan_layer/storage_tests/src/foreign_substate_pledges.rs (3)
`14-27`: Both tests are marked with `#[ignore]`.

Both the SQLite and RocksDB tests are marked with `#[ignore]`, ensuring consistency. Make sure this is intentional, as it means these tests won't run by default.

`29-51`: Well-structured test of foreign substate pledges.

The function properly tests the lifecycle of foreign substate pledges, including saving, retrieving, and removing them. The transaction is correctly rolled back at the end.

`53-59`: Good helper function for test data generation.

The `build_substate_pledge()` function encapsulates the creation of test data, which promotes code reuse and readability.
```rust
#[ignore]
#[test]
fn foreign_proposals_sqlite() {
    let db = create_sqlite();
    db.foreign_keys_off().unwrap();
    foreign_proposals_operations(db);
}

#[test]
fn foreign_proposals_rocksdb() {
    let db = create_rocksdb();
    foreign_proposals_operations(db);
}
```
💡 Verification agent
🧩 Analysis chain
Consider consistent test ignoring approach.

The SQLite test is marked with `#[ignore]` while the RocksDB test isn't. For consistency, both tests should either be ignored or not, unless there's a specific reason for this difference.
🏁 Script executed:

```shell
#!/bin/bash
# Let's check if this pattern of ignoring is consistent across other test files
echo "Checking for patterns of test ignoring across files:"
rg "#\[ignore\]" --before 3 --after 3 "dan_layer/storage_tests/src/*.rs"
```

Length of output: 632
Ensure consistent test ignoring in foreign_proposals.rs

- In this file, the `foreign_proposals_sqlite` test is annotated with `#[ignore]`, while the `foreign_proposals_rocksdb` test isn't.
- Please align the test attributes so that both tests are either ignored or enabled, unless there is a specific reason for the discrepancy. If the difference is intentional, adding an inline comment to clarify the rationale would improve clarity.

To verify the consistency across tests, you can run the following corrected shell script. Once you review the output, please confirm whether both tests should share the same ignoring strategy or if a deliberate exception is intended.
🏁 Script executed:

```shell
#!/bin/bash
# Re-run the ignore attribute check using correct context flags within the storage tests directory
echo "Checking for #[ignore] usage in tests under dan_layer/storage_tests/src:"
rg --before-context 3 --after-context 3 "#\[ignore\]" dan_layer/storage_tests/src
```

Length of output: 7038
Ensure consistent test ignoring in foreign_proposals.rs

In `dan_layer/storage_tests/src/foreign_proposals.rs`, the SQLite test is marked with `#[ignore]` while its RocksDB counterpart is not. Given that similar modules (e.g., `foreign_substate_pledges.rs`, `misc.rs`, and `substates.rs`) either consistently ignore both variants or have an established rationale, please verify whether the discrepancy here is intentional.

- If intentional: Add an inline comment clarifying why only the SQLite test is ignored.
- If not: Update the tests so both are either ignored or enabled for a consistent approach (see the sketch below).
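For illustration, one way to align the two tests, assuming the intent is to keep both backends opt-in (the opposite fix, dropping `#[ignore]` from the SQLite test, would be equally consistent):

```rust
// A minimal sketch: mirror the SQLite test's attribute on the RocksDB
// variant. `create_rocksdb` and `foreign_proposals_operations` are the
// helpers already defined in this test file.
#[ignore]
#[test]
fn foreign_proposals_rocksdb() {
    let db = create_rocksdb();
    foreign_proposals_operations(db);
}
```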
Actionable comments posted: 2
🧹 Nitpick comments (2)
dan_layer/state_store_rocksdb/src/model/traits.rs (2)
`258-258`: Align default prefix usage

The comment indicates that the code should default to the model's key prefix instead of an empty string. This is a minor improvement that can prevent silent mismatches.
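As a rough sketch of the suggested default (the `model_prefix` argument is a stand-in for however the model exposes its key prefix, not the PR's actual API):

```rust
// Fall back to the model's own key prefix rather than "" when the caller
// doesn't supply one, so scans stay confined to the model's keyspace.
fn effective_prefix(model_prefix: &str, override_prefix: Option<&str>) -> String {
    override_prefix.unwrap_or(model_prefix).to_string()
}
```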
`309-321`: Consider using range delete for better performance

Loop-deleting all keys with a shared prefix can be expensive at scale. Instead, explore RocksDB's range delete feature (e.g. `delete_range_cf`) to optimize bulk deletions.
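A minimal sketch of the idea, assuming prefixed byte keys within one column family (the helper name is hypothetical, and the upper-bound computation is simplified):

```rust
use rocksdb::{ColumnFamily, WriteBatch, DB};

// Delete every key starting with `prefix` using a single range tombstone
// instead of iterating and issuing one point delete per key.
fn delete_by_prefix(db: &DB, cf: &ColumnFamily, prefix: &[u8]) -> Result<(), rocksdb::Error> {
    let start = prefix.to_vec();
    let mut end = prefix.to_vec();
    // Exclusive upper bound: increment the last byte of the prefix.
    // (Simplified: a production version must handle trailing 0xFF bytes.)
    if let Some(last) = end.last_mut() {
        *last += 1;
    }
    let mut batch = WriteBatch::default();
    batch.delete_range_cf(cf, start, end);
    db.write(batch)
}
```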
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
- `Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (3)
- dan_layer/state_store_rocksdb/Cargo.toml (1 hunks)
- dan_layer/state_store_rocksdb/src/model/block.rs (1 hunks)
- dan_layer/state_store_rocksdb/src/model/traits.rs (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- dan_layer/state_store_rocksdb/Cargo.toml
⏰ Context from checks skipped due to timeout of 90000ms (4)
- GitHub Check: test
- GitHub Check: check stable
- GitHub Check: check nightly
- GitHub Check: clippy
🔇 Additional comments (2)
dan_layer/state_store_rocksdb/src/model/traits.rs (1)

`47-47`: Avoid panics when retrieving column families

This `.unwrap()` call can panic if the column family is missing or not properly registered, potentially crashing the application. Gracefully handle the error with a custom `RocksDbStorageError` instead.

Also applies to: 61-61, 183-183, 221-221, 273-273
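A hedged sketch of the suggested pattern; the error variant name is illustrative and would need a matching variant on the crate's `RocksDbStorageError`:

```rust
use rocksdb::{ColumnFamily, DB};

// Resolve a column family handle, surfacing a descriptive storage error
// instead of panicking when the CF was never registered.
fn cf_handle_checked<'a>(db: &'a DB, name: &str) -> Result<&'a ColumnFamily, RocksDbStorageError> {
    db.cf_handle(name).ok_or_else(|| RocksDbStorageError::ColumnFamilyNotFound {
        name: name.to_string(),
    })
}
```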
dan_layer/state_store_rocksdb/src/model/block.rs (1)

`56-56`: Avoid panics when retrieving column families

Using `.unwrap()` on `db.cf_handle(cf)` risks a runtime panic if the CF is missing or misnamed. Handle the error gracefully with a custom `RocksDbStorageError`.

Also applies to: 82-82, 116-116, 246-246
```rust
let keys: Vec<String> = iterator
    .map(|item| {
        // TODO: properly handle errors and avoid unwraps
        let (_, value) = item.unwrap();
        // the value is the key in the default CF
        String::from_utf8(value.to_vec()).unwrap()
    })
```
🛠️ Refactor suggestion
Handle iteration and UTF-8 errors gracefully

Currently, the code uses `.unwrap()` for both the iterator result and the UTF-8 conversion, which can panic if corrupted data is encountered. Please return appropriate errors instead of panicking.
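A rough sketch of an error-propagating version of the snippet above; the error variants are illustrative and assume `RocksDbStorageError` can wrap both the RocksDB iterator error and a UTF-8 decoding failure:

```rust
let keys: Vec<String> = iterator
    .map(|item| {
        let (_, value) = item.map_err(|e| RocksDbStorageError::DbError { source: e })?;
        String::from_utf8(value.to_vec())
            .map_err(|e| RocksDbStorageError::Decoding { details: e.to_string() })
    })
    .collect::<Result<Vec<String>, RocksDbStorageError>>()?;
```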
```rust
let (_, value) = item.unwrap();
BlockId::try_from(value.to_vec()).unwrap()
```
🛠️ Refactor suggestion
Eliminate `.unwrap()` in iterator and `BlockId::try_from` usage

These `.unwrap()` calls can cause unexpected panics on malformed data. Return a descriptive `RocksDbStorageError` to protect the system from unexpected runtime failures.

Also applies to: 88-89, 136-136
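In the same spirit, a sketch for the `BlockId` case (again with illustrative error variants, not the crate's actual ones):

```rust
let (_, value) = item.map_err(|e| RocksDbStorageError::DbError { source: e })?;
let block_id = BlockId::try_from(value.to_vec())
    .map_err(|e| RocksDbStorageError::Decoding { details: e.to_string() })?;
```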
Description
- New crate `state_store_rocksdb` with a RocksDB implementation of the consensus state store traits
- New traits `RocksdbModel` and `ModelColumnFamily`; each entity has a model (e.g. `BlockModel` -> `blocks` table) that encapsulates all the basic operations (a rough sketch of this pattern is shown below)
- `RocksDbStateStoreReadTransaction` and `RocksDbStateStoreWriteTransaction` implement the database operations (e.g. queries) by using the model operations
- New config option `database_type` to choose between `"rocksdb"` and `"sqlite"` (default). The config option is only a placeholder for now, as the Sqlite database is being used across the codebase
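For illustration, a rough sketch of the model pattern described above; the trait methods shown here are assumptions for the sake of the example, not the PR's actual API:

```rust
use serde::{de::DeserializeOwned, Serialize};
use tari_dan_storage::consensus_models::Block;

// Each model names its column family and defines how its items are keyed;
// shared get/insert plumbing can then be written once against this trait.
pub trait RocksdbModel {
    type Item: Serialize + DeserializeOwned;

    /// Column family backing this model (e.g. "blocks")
    fn cf_name() -> &'static str;

    /// Primary key bytes for an item
    fn key(item: &Self::Item) -> Vec<u8>;
}

pub struct BlockModel;

impl RocksdbModel for BlockModel {
    type Item = Block;

    fn cf_name() -> &'static str {
        "blocks"
    }

    fn key(block: &Block) -> Vec<u8> {
        // Assumes the block ID exposes its raw bytes
        block.id().as_bytes().to_vec()
    }
}
```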
Motivation and Context

Currently we only have a SQLite implementation of the VN state store. This is fine for development, but something like RocksDB should result in a performance improvement.
This PR implements (most of) the database operations using RocksDB. The new config option `database_type` allows choosing between SQLite or RocksDB. This PR sets the default database implementation to SQLite to avoid breaking changes.

Future work on the RocksDB data store will revolve around:
- Implementing the remaining operations (marked with the `todo!` Rust macro)
- Actually honoring the new config option (`database_type`)

How Has This Been Tested?
There is a new crate `storage_tests` with unit tests for the database operations. It covers some of the most common operations.

Further work would be required to make existing consensus tests and integration tests pass using RocksDB.
What process can a PR reviewer use to test or verify this change?
See previous section
Breaking Changes
Summary by CodeRabbit
New Features
Tests
Refactor & Chores