feat: add basic cucumber tests #254
base: development
Conversation
Force-pushed from f21cb6b to 5d28125
Walkthrough

The changes apply broadly across CI workflows, integration tests, and the sha_p2pool project. Workflow files have been updated to narrow triggers, adjust toolchain settings, and simplify scheduling. New scripts and configuration files enhance logging, CLI handling, and error reporting. Several new Rust modules and functions have been added or modified to manage base node processes, p2pool nodes, mining, networking utilities, and integration tests. The sha_p2pool package now adopts a dedicated Cargo.toml and a revised workspace structure, while several internal components have been exposed publicly and logging details refined.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant TR as Test Runner
    participant TW as TariWorld
    participant BN as BaseNodeProcess
    participant PN as P2PoolProcess
    participant M as Miner Module
    TR->>TW: Initialize test environment
    TW->>BN: Spawn base node (seed/non-seed)
    TW->>PN: Spawn p2pool node (seed/regular)
    M->>PN: Mine and submit block
    PN->>TW: Report block status (height, connectivity)
    TR->>TW: Verify integration test results
```
Force-pushed from f36d4d9 to bfbdd8b
Actionable comments posted: 12
🧹 Nitpick comments (21)
integration_tests/src/p2pool_process.rs (5)
30-41: Consider clarifying field responsibilities with doc comments.

The fields in `P2PoolProcess` are fairly self-explanatory, but adding doc comments explaining the purpose of each (especially `squad` and `node_id`) could help new contributors understand how the node is orchestrated in tests.
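A hypothetical sketch of what those doc comments could look like. Only `squad` and `node_id` are named in the review; the other field and the `summary` helper are illustrative, not the real struct.

```rust
// Illustrative sketch: field-level doc comments on a test-orchestration struct.
pub struct P2PoolProcess {
    /// Name used to look this node up in the test world.
    pub name: String,
    /// Mining squad the node joins; nodes in the same squad share a chain.
    pub squad: String,
    /// libp2p peer id, used when steps assert on connectivity.
    pub node_id: String,
}

impl P2PoolProcess {
    /// One-line description for test logs.
    pub fn summary(&self) -> String {
        format!("{} (squad: {}, peer: {})", self.name, self.squad, self.node_id)
    }
}

fn main() {
    let p = P2PoolProcess {
        name: "NODE_A".into(),
        squad: "default".into(),
        node_id: "peer-1".into(),
    };
    println!("{}", p.summary());
}
```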
49-65: Callback chaining for config-based spawn logic.

`spawn_p2pool_node` simply delegates to `spawn_p2pool_node_with_config`. This separation is clean, but if differences in config logic grow, make sure the function responsibilities stay distinct and well documented.
68-279: Potential improvements to error handling and performance.

- The code panics immediately in several places (e.g., directory creation, building the binary). Consider returning a `Result` or using `anyhow::Context` to propagate errors more gracefully, especially for CI usage.
- The build step inside the test might run repeatedly if multiple nodes are spawned. Consider building once if performance is critical for your test suite.
281-317: Add a short sleep or early stop in the port-closure loop to prevent busy waiting.

The `kill()` method checks whether the ports are freed, but it retries up to 20 times without a delay. This can cause unnecessary CPU usage while spinning. Inserting a short sleep, or breaking early once both ports are confirmed closed, would be gentler on test resources.
319-432: Command-line argument builder.

Converting typed arguments to strings is straightforward here. However, if the CLI evolves, consider leveraging Clap's subcommand generation or typed argument parsing for more robust validation and clearer maintainability, especially once more flags or dynamic checks are introduced.
integration_tests/tests/features/Sync.feature (1)
11-17: Consider using dynamic wait conditions instead of fixed wait times.

Using fixed wait times (10 seconds) can make tests flaky, especially in different environments or under load. Consider:

- Using dynamic wait conditions that check for actual synchronization.
- Using configurable wait times from environment variables or constants.

Example approach using a dynamic wait condition:

```gherkin
When I wait until NODE_B is synchronized with NODE_A
And I add 10 blocks to p2pool node NODE_A
When I wait until NODE_C is synchronized with NODE_A
```

integration_tests/tests/steps/p2pool_steps.rs (1)
9-13: Add debug logging for consistency.

For consistency with base_node_steps.rs, consider adding debug logging to track p2pool operations.

```diff
 async fn start_p2pool_seed_node(world: &mut TariWorld, p2pool_name: String, squad: String, base_node_name: String) {
+    debug!(target: LOG_TARGET, "start_p2pool_seed_node: spawning '{}' in squad '{}' connected to '{}'", p2pool_name, squad, base_node_name);
     spawn_p2pool_node(world, true, p2pool_name.clone(), Some(squad), base_node_name).await;
+    debug!(target: LOG_TARGET, "start_p2pool_seed_node: spawned '{}'", p2pool_name);
 }

 async fn start_p2pool_node(world: &mut TariWorld, p2pool_name: String, squad: String, base_node_name: String) {
+    debug!(target: LOG_TARGET, "start_p2pool_node: spawning '{}' in squad '{}' connected to '{}'", p2pool_name, squad, base_node_name);
     spawn_p2pool_node(world, false, p2pool_name.clone(), Some(squad), base_node_name).await;
+    debug!(target: LOG_TARGET, "start_p2pool_node: spawned '{}'", p2pool_name);
 }

 async fn add_blocks_to_p2pool_node(world: &mut TariWorld, number_of_blocks: u64, p2pool_name: String) {
+    debug!(target: LOG_TARGET, "add_blocks_to_p2pool_node: adding {} blocks to '{}'", number_of_blocks, p2pool_name);
     mine_and_submit_tari_blocks(world, number_of_blocks, p2pool_name.clone()).await;
+    debug!(target: LOG_TARGET, "add_blocks_to_p2pool_node: added {} blocks to '{}'", number_of_blocks, p2pool_name);
 }
```

Also applies to: 15-19, 21-25
integration_tests/src/lib.rs (1)
39-39: Extract magic numbers into constants.

The timeout values should be defined as constants for better maintainability.

```diff
+const MAX_TRIES: u32 = 240; // 4 * 60
+const RETRY_DELAY_MS: u64 = 250;

-    let max_tries = 4 * 60;
+    let max_tries = MAX_TRIES;
```

integration_tests/src/miner.rs (1)
77-88: Extract magic number into a constant.

The maximum number of tries should be defined as a constant for better maintainability.

```diff
+const MAX_NONCE_TRIES: u64 = 10_000_000;
+
 pub fn find_sha3x_header_with_achieved_difficulty(header: &mut BlockHeader, achieved_difficulty: Difficulty) {
     let mut num_tries = 0;
     while sha3x_difficulty(header).unwrap() != achieved_difficulty {
         header.nonce += 1;
         num_tries += 1;
-        if num_tries > 10_000_000 {
+        if num_tries > MAX_NONCE_TRIES {
             // Just in case we burn a hole in the CI server
             panic!("Could not find a nonce for achieved difficulty in time");
         }
     }
 }
```

integration_tests/tests/cucumber.rs (3)
35-46: Consider the visibility and completeness of _flush_stdout.

The function has two issues:

- The underscore prefix suggests it's private, but it's declared as public.
- The commented-out `flush()` call might be necessary for immediate output.

Apply this diff to address both issues:

```diff
-fn _flush_stdout(buffer: &Arc<Mutex<Vec<u8>>>) {
+fn flush_stdout_immediate(buffer: &Arc<Mutex<Vec<u8>>>) {
     let mut buffer = buffer.lock().unwrap();
     io::stdout().write_all(&buffer).unwrap();
-    // io::stdout().flush().unwrap();
+    io::stdout().flush().unwrap();
     info!(
         target: LOG_TARGET_STDOUT,
         "{}",
         str::from_utf8(&buffer).unwrap()
     );
     buffer.clear();
 }
```
48-57: Remove redundant network setup error handling.

The network setup is performed twice, with the second check being redundant. Apply this diff to remove the redundancy:

```diff
-    std::env::set_var("TARI_TARGET_NETWORK", "localnet");
-    std::env::set_var("TARI_NETWORK", "localnet");
-    set_network_if_choice_valid(Network::LocalNet).unwrap();
-
-    if let Err(err) = set_network_if_choice_valid(Network::LocalNet) {
-        let msg = format!("Error setting network: {}", err);
-        error!(target: LOG_TARGET, "{}", msg);
-        panic!("{}", msg);
-    }
+    std::env::set_var("TARI_TARGET_NETWORK", "localnet");
+    std::env::set_var("TARI_NETWORK", "localnet");
+    if let Err(err) = set_network_if_choice_valid(Network::LocalNet) {
+        let msg = format!("Error setting network: {}", err);
+        error!(target: LOG_TARGET, "{}", msg);
+        panic!("{}", msg);
+    }
```
77-81: Resolve commented-out stdout buffer handling.

There are commented-out lines related to stdout buffer handling. Either implement the functionality or remove the comments to maintain code clarity. If the stdout buffer handling is needed, apply this diff:

```diff
 .after(move |_feature, _rule, scenario, ev, maybe_world| {
-    // let stdout_buffer = stdout_buffer_clone.clone();
+    let stdout_buffer = stdout_buffer.clone();
     Box::pin(async move {
-        // flush_stdout(&stdout_buffer);
+        flush_stdout(&stdout_buffer);
```

p2pool/src/cli/commands/util.rs (1)
227-242: Improve file waiting mechanism.

The current implementation uses busy waiting with a fixed sleep duration, which is not optimal. Consider using a backoff strategy or filesystem notifications. Here's an example implementation using exponential backoff:

```rust
pub fn read_from_file(path: &Path, timeout: Duration) -> io::Result<Self> {
    let start = Instant::now();
    let mut wait_time = Duration::from_millis(100);
    while !path.exists() {
        if start.elapsed() >= timeout {
            return Err(io::Error::new(
                io::ErrorKind::TimedOut,
                format!("Timeout waiting for '{}' to be created", path.display()),
            ));
        }
        sleep(wait_time);
        wait_time = min(wait_time * 2, Duration::from_secs(1));
    }
    let file = File::open(path)?;
    let reader = BufReader::new(file);
    let libp2p_info = serde_json::from_reader(reader)?;
    Ok(libp2p_info)
}
```

integration_tests/src/base_node_process.rs (2)
88-215: Consider breaking down the long function.

The `spawn_base_node_with_config` function is quite long and handles multiple responsibilities. Consider breaking it down into smaller functions:

- Port allocation and identity setup
- Configuration setup
- Process spawning

```diff
-async fn spawn_base_node_with_config(
+async fn setup_base_node_ports_and_identity(
+    world: &mut TariWorld,
+    bn_name: String,
+) -> (u16, u16, PathBuf, NodeIdentity, BaseNodeConfig) {
+    let port: u16;
+    let grpc_port: u16;
+    let temp_dir_path: PathBuf;
+    let base_node_identity: NodeIdentity;
+    let base_node_config: BaseNodeConfig;
+
+    if let Some(node_ps) = world.base_nodes.get(&bn_name) {
+        port = node_ps.port;
+        grpc_port = node_ps.grpc_port;
+        temp_dir_path = node_ps.temp_dir_path.clone();
+        base_node_config = node_ps.config.clone();
+        base_node_identity = node_ps.identity.clone();
+    } else {
+        port = get_port(18000..18499).unwrap();
+        grpc_port = get_port(18500..18999).unwrap();
+        temp_dir_path = world
+            .current_base_dir
+            .as_ref()
+            .expect("Base dir on world")
+            .join("base_nodes")
+            .join(format!("{}_grpc_port_{}", bn_name.clone(), grpc_port));
+
+        let base_node_address = Multiaddr::from_str(&format!("/ip4/127.0.0.1/tcp/{}", port)).unwrap();
+        base_node_identity = NodeIdentity::random(&mut OsRng, base_node_address, PeerFeatures::COMMUNICATION_NODE);
+        save_as_json(temp_dir_path.join("base_node.json"), &base_node_identity).unwrap();
+    }
+
+    (port, grpc_port, temp_dir_path, base_node_identity, base_node_config)
+}
+
+async fn setup_base_node_config(
+    temp_dir_path: PathBuf,
+    port: u16,
+    grpc_port: u16,
+    peer_addresses: Vec<Multiaddr>,
+    mut base_node_config: BaseNodeConfig,
+) -> minotari_node::ApplicationConfig {
+    let mut common_config = CommonConfig::default();
+    common_config.base_path = temp_dir_path.clone();
+
+    let mut config = minotari_node::ApplicationConfig {
+        common: common_config,
+        auto_update: AutoUpdateConfig::default(),
+        base_node: base_node_config,
+        peer_seeds: PeerSeedsConfig {
+            peer_seeds: peer_addresses.into(),
+            dns_seeds_use_dnssec: false,
+            ..Default::default()
+        },
+    };
+
+    config.base_node.network = Network::LocalNet;
+    config.base_node.grpc_enabled = true;
+    config.base_node.grpc_address = Some(format!("/ip4/127.0.0.1/tcp/{}", grpc_port).parse().unwrap());
+    config.base_node.report_grpc_error = true;
+    config.base_node.metadata_auto_ping_interval = Duration::from_secs(15);
+
+    config.base_node.data_dir = temp_dir_path.to_path_buf();
+    config.base_node.identity_file = PathBuf::from("base_node_id.json");
+    config.base_node.tor_identity_file = PathBuf::from("base_node_tor_id.json");
+    config.base_node.max_randomx_vms = 1;
+
+    config.base_node.lmdb_path = temp_dir_path.to_path_buf();
+    config.base_node.p2p.transport.transport_type = TransportType::Tcp;
+    config.base_node.p2p.transport.tcp.listener_address =
+        format!("/ip4/127.0.0.1/tcp/{}", port).parse().unwrap();
+    config.base_node.p2p.public_addresses = MultiaddrList::from(vec![config
+        .base_node
+        .p2p
+        .transport
+        .tcp
+        .listener_address
+        .clone()]);
+    config.base_node.p2p.dht = DhtConfig::default_local_test();
+    config.base_node.p2p.dht.database_url = DbConnectionUrl::file(format!("{}-dht.sqlite", port));
+    config.base_node.p2p.dht.network_discovery.enabled = true;
+    config.base_node.p2p.allow_test_addresses = true;
+    config.base_node.storage.orphan_storage_capacity = 10;
+    if config.base_node.storage.pruning_horizon != 0 {
+        config.base_node.storage.pruning_interval = 1;
+    };
+    config.base_node.grpc_server_allow_methods = GrpcMethod::ALL_VARIANTS.to_vec().into();
+
+    config.base_node.set_base_path(temp_dir_path.clone());
+    config
+}
+
+async fn spawn_base_node_with_config(
     world: &mut TariWorld,
     is_seed_node: bool,
     bn_name: String,
     peers: Vec<String>,
     mut base_node_config: BaseNodeConfig,
 ) {
-    let port: u16;
-    let grpc_port: u16;
-    let temp_dir_path: PathBuf;
-    let base_node_identity: NodeIdentity;
-
-    if let Some(node_ps) = world.base_nodes.get(&bn_name) {
-        port = node_ps.port;
-        grpc_port = node_ps.grpc_port;
-        temp_dir_path = node_ps.temp_dir_path.clone();
-        base_node_config = node_ps.config.clone();
-
-        base_node_identity = node_ps.identity.clone();
-    } else {
-        // each spawned base node will use different ports
-        port = get_port(18000..18499).unwrap();
-        grpc_port = get_port(18500..18999).unwrap();
-        // create a new temporary directory
-        temp_dir_path = world
-            .current_base_dir
-            .as_ref()
-            .expect("Base dir on world")
-            .join("base_nodes")
-            .join(format!("{}_grpc_port_{}", bn_name.clone(), grpc_port));
-
-        let base_node_address = Multiaddr::from_str(&format!("/ip4/127.0.0.1/tcp/{}", port)).unwrap();
-        base_node_identity = NodeIdentity::random(&mut OsRng, base_node_address, PeerFeatures::COMMUNICATION_NODE);
-        save_as_json(temp_dir_path.join("base_node.json"), &base_node_identity).unwrap();
-    };
+    let (port, grpc_port, temp_dir_path, base_node_identity, base_node_config) =
+        setup_base_node_ports_and_identity(world, bn_name.clone()).await;

     println!("Base node identity: {}", base_node_identity);
     let identity = base_node_identity.clone();

     let shutdown = Shutdown::new();
     let process = BaseNodeProcess {
         name: bn_name.clone(),
         port,
         grpc_port,
         identity,
         temp_dir_path: temp_dir_path.clone(),
         is_seed_node,
         seed_nodes: peers.clone(),
         config: base_node_config.clone(),
         kill_signal: shutdown.clone(),
     };

     let name_cloned = bn_name.clone();

     let peer_addresses = get_base_node_peer_addresses(world, &peers).await;

-    let mut common_config = CommonConfig::default();
-    common_config.base_path = temp_dir_path.clone();
     task::spawn(async move {
-        let mut base_node_config = minotari_node::ApplicationConfig {
-            common: common_config,
-            auto_update: AutoUpdateConfig::default(),
-            base_node: base_node_config,
-            peer_seeds: PeerSeedsConfig {
-                peer_seeds: peer_addresses.into(),
-                dns_seeds_use_dnssec: false,
-                ..Default::default()
-            },
-        };
-
-        println!("Using base_node temp_dir: {}", temp_dir_path.clone().display());
-        base_node_config.base_node.network = Network::LocalNet;
-        base_node_config.base_node.grpc_enabled = true;
-        base_node_config.base_node.grpc_address = Some(format!("/ip4/127.0.0.1/tcp/{}", grpc_port).parse().unwrap());
-        base_node_config.base_node.report_grpc_error = true;
-        base_node_config.base_node.metadata_auto_ping_interval = Duration::from_secs(15);
-
-        base_node_config.base_node.data_dir = temp_dir_path.to_path_buf();
-        base_node_config.base_node.identity_file = PathBuf::from("base_node_id.json");
-        base_node_config.base_node.tor_identity_file = PathBuf::from("base_node_tor_id.json");
-        base_node_config.base_node.max_randomx_vms = 1;
-
-        base_node_config.base_node.lmdb_path = temp_dir_path.to_path_buf();
-        base_node_config.base_node.p2p.transport.transport_type = TransportType::Tcp;
-        base_node_config.base_node.p2p.transport.tcp.listener_address =
-            format!("/ip4/127.0.0.1/tcp/{}", port).parse().unwrap();
-        base_node_config.base_node.p2p.public_addresses = MultiaddrList::from(vec![base_node_config
-            .base_node
-            .p2p
-            .transport
-            .tcp
-            .listener_address
-            .clone()]);
-        base_node_config.base_node.p2p.dht = DhtConfig::default_local_test();
-        base_node_config.base_node.p2p.dht.database_url = DbConnectionUrl::file(format!("{}-dht.sqlite", port));
-        base_node_config.base_node.p2p.dht.network_discovery.enabled = true;
-        base_node_config.base_node.p2p.allow_test_addresses = true;
-        base_node_config.base_node.storage.orphan_storage_capacity = 10;
-        if base_node_config.base_node.storage.pruning_horizon != 0 {
-            base_node_config.base_node.storage.pruning_interval = 1;
-        };
-        base_node_config.base_node.grpc_server_allow_methods = GrpcMethod::ALL_VARIANTS.to_vec().into();
-
-        // Hierarchically set the base path for all configs
-        base_node_config.base_node.set_base_path(temp_dir_path.clone());
+        let config = setup_base_node_config(
+            temp_dir_path.clone(),
+            port,
+            grpc_port,
+            peer_addresses,
+            base_node_config,
+        ).await;

         let msg = format!(
             "Initializing base node: '{}', port: '{}', grpc_port: '{}', is_seed_node: '{}'",
             name_cloned, port, grpc_port, is_seed_node
         );
         debug!(target: LOG_TARGET, "{}", msg);
         println!("{}", msg);

-        let result = run_base_node(shutdown, Arc::new(base_node_identity), Arc::new(base_node_config)).await;
+        let result = run_base_node(shutdown, Arc::new(base_node_identity), Arc::new(config)).await;
         if let Err(e) = result {
             panic!("{:?}", e);
         }
     });
```
217-237: Consider improving port availability check.

The `kill` method's port availability check could be improved to avoid potential infinite loops. Add a timeout and max retries:

```diff
     pub fn kill(&mut self) {
+        const MAX_RETRIES: u32 = 30;
+        const RETRY_DELAY: Duration = Duration::from_millis(100);
+
         self.kill_signal.trigger();
-        loop {
-            // lets wait till the port is cleared
-            if TcpListener::bind(("127.0.0.1", self.port)).is_ok() {
-                break;
-            }
-        }
-        loop {
-            // lets wait till the port is cleared
-            if TcpListener::bind(("127.0.0.1", self.grpc_port)).is_ok() {
-                break;
-            }
-        }
+
+        for _ in 0..MAX_RETRIES {
+            if TcpListener::bind(("127.0.0.1", self.port)).is_ok() &&
+                TcpListener::bind(("127.0.0.1", self.grpc_port)).is_ok() {
+                return;
+            }
+            std::thread::sleep(RETRY_DELAY);
+        }
+
+        log::warn!(
+            target: LOG_TARGET,
+            "Ports {} and {} not freed after {} retries",
+            self.port,
+            self.grpc_port,
+            MAX_RETRIES
+        );
     }
```

integration_tests/.husky/_/husky.sh (3)
7-8: Fix potential return value masking.

The current assignment might mask return values (a failing `basename` would be hidden by `readonly`'s own exit status). Assign first, then mark the variable readonly:

```diff
-readonly hook_name="$(basename "$0")"
+hook_name="$(basename "$0")"
+readonly hook_name
```

🧰 Tools
🪛 Shellcheck (0.10.0)
[warning] 7-7: Declare and assign separately to avoid masking return values. (SC2155)
15-18: Add shellcheck directive for non-constant source.

The script sources a non-constant file, which shellcheck can't verify. Add a shellcheck directive to specify the source:

```diff
+  # shellcheck source=/dev/null
   if [ -f ~/.huskyrc ]; then
     debug "sourcing ~/.huskyrc"
     . ~/.huskyrc
   fi
```

🧰 Tools
🪛 Shellcheck (0.10.0)
[warning] 17-17: ShellCheck can't follow non-constant source. Use a directive to specify location. (SC1090)
20-20: Fix incorrect export with readonly.

The current syntax incorrectly applies export to the `readonly` keyword. Use the correct syntax:

```diff
-export readonly husky_skip_init=1
+readonly husky_skip_init=1
+export husky_skip_init
```

🧰 Tools
🪛 Shellcheck (0.10.0)
[error] 20-20: This applies export to the variable named readonly, which is probably not what you want. Use a separate command or the appropriate `declare` options instead. (SC2316)
integration_tests/README.md (3)
7-15: Placeholders for test names need updating.

Multiple examples (lines 11 and 14) use `"?????????"` as a placeholder for test names. Consider replacing these placeholders with realistic examples, or with clearer instructions on how to choose an appropriate test name, to improve the documentation's clarity.

30-41: Input glob placeholders need clarification.

Similar to the test name examples, the input glob examples (lines 34, 37, and 40) show `"?????????"` as placeholders. Please update them with concrete glob patterns or guidance to improve the usability of the instructions.

🧰 Tools
🪛 LanguageTool
[style] ~30-~30: Three successive sentences begin with the same word. Consider rewording the sentence or use a thesaurus to find a synonym.
Context: ...est cucumber -- -t "@critical" ``` - To run a specific file or files add `-- --...
(ENGLISH_WORD_REPEAT_BEGINNING_RULE)

43-48: Documentation notes section clarity.

The "Notes" section clearly explains the current state of the test suite. For improved readability, consider varying sentence structures to avoid repetitive phrasing.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)

- `Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (30)
.github/workflows/audit.yml
(1 hunks)cargo.toml
(1 hunks)integration_tests/.husky/_/husky.sh
(1 hunks)integration_tests/README.md
(1 hunks)integration_tests/cargo.toml
(1 hunks)integration_tests/log4rs/cucumber.yml
(1 hunks)integration_tests/src/base_node_process.rs
(1 hunks)integration_tests/src/lib.rs
(1 hunks)integration_tests/src/miner.rs
(1 hunks)integration_tests/src/p2pool_process.rs
(1 hunks)integration_tests/src/world.rs
(1 hunks)integration_tests/tests/cucumber.rs
(1 hunks)integration_tests/tests/features/Sync.feature
(1 hunks)integration_tests/tests/steps/base_node_steps.rs
(1 hunks)integration_tests/tests/steps/mod.rs
(1 hunks)integration_tests/tests/steps/p2pool_steps.rs
(1 hunks)log4rs_sample.yml
(1 hunks)p2pool/Cargo.toml
(2 hunks)p2pool/src/cli/args.rs
(4 hunks)p2pool/src/cli/commands/mod.rs
(1 hunks)p2pool/src/cli/commands/util.rs
(5 hunks)p2pool/src/cli/mod.rs
(1 hunks)p2pool/src/lib.rs
(1 hunks)p2pool/src/main.rs
(1 hunks)p2pool/src/server/config.rs
(1 hunks)p2pool/src/server/p2p/network.rs
(4 hunks)p2pool/src/sharechain/lmdb_block_storage.rs
(1 hunks)p2pool/src/sharechain/p2block.rs
(1 hunks)p2pool/src/sharechain/p2chain_level.rs
(1 hunks)src/cli/mod.rs
(0 hunks)
💤 Files with no reviewable changes (1)
- src/cli/mod.rs
✅ Files skipped from review due to trivial changes (1)
- p2pool/src/server/config.rs
🧰 Additional context used
🪛 LanguageTool
integration_tests/README.md
[style] ~30-~30: Three successive sentences begin with the same word. Consider rewording the sentence or use a thesaurus to find a synonym.
Context: ...est cucumber -- -t "@critical" ``` - To run a specific file or files add `-- --...
(ENGLISH_WORD_REPEAT_BEGINNING_RULE)
🪛 Shellcheck (0.10.0)
integration_tests/.husky/_/husky.sh
[warning] 7-7: Declare and assign separately to avoid masking return values.
(SC2155)
[warning] 17-17: ShellCheck can't follow non-constant source. Use a directive to specify location.
(SC1090)
[error] 20-20: This applies export to the variable named readonly, which is probably not what you want. Use a separate command or the appropriate `declare` options instead. (SC2316)
🔇 Additional comments (40)
p2pool/src/main.rs (1)
13-13: Confirm the updated import path.

This new import pulls `Cli` from `sha_p2pool`, replacing the old in-house `cli` module. Ensure that all references to `Cli` within this file (and elsewhere) align with the new module structure and functionality provided by `sha_p2pool`.

integration_tests/src/p2pool_process.rs (2)
25-27: Good use of constants for better maintainability.

Defining `LOG_TARGET` and `LIBP2P_INFO_FILE` as constants here improves code readability and helps avoid string duplication in logging and file paths.
43-47: Automatic resource cleanup on drop.

Invoking `kill()` in the `Drop` implementation ensures the child process is terminated when `P2PoolProcess` goes out of scope. This is a clean approach to avoid orphaned processes in integration tests.

p2pool/src/cli/mod.rs (1)
4-9: Nicely structured CLI exports.

Organizing `args`, `commands`, and `util` into separate modules and re-exporting them here makes your CLI code more modular. This approach improves maintainability and discoverability of the core CLI functionality.

p2pool/src/lib.rs (1)
1-14: LGTM!

The module organization and exports are well-structured, following Rust's module system conventions.
integration_tests/tests/steps/mod.rs (2)
12-14: LGTM!

The constants are well-named and have clear purposes.
16-19: LGTM!

The `wait_seconds` function is a simple and correct implementation using tokio's sleep.

integration_tests/tests/cucumber.rs (1)
3-3: ✅ Verification successful

Verify the stability of the internal_output_capture feature.

The code uses an unstable feature, `internal_output_capture`. This might cause issues when upgrading Rust versions.
🌐 Web query:
What is the status of the internal_output_capture feature in Rust? Is it stable?
💡 Result:
The `internal_output_capture` feature in Rust is unstable and explicitly marked as internal to the Rust compiler/test infrastructure. Here are key details:

Current Status
- Tagged with `#[unstable(feature = "internal_output_capture")]` in the standard library code[1][3]
- Documented in the Rust Unstable Book as an internal compiler feature not intended for general use[4]

Purpose
Used internally by Rust's test framework to capture output from `print!` macros and panics during tests[1][3][6]. The implementation uses thread-local storage to intercept output[1].

Stability
- Explicitly declared as unstable with no stabilization timeline[4]
- Implementation warns it "may disappear in the future"[1][3]
- Requires `#![feature(internal_output_capture)]` to use (nightly-only)[4]

Key Restrictions

```rust
#[unstable(
    feature = "internal_output_capture",
    reason = "meant for use in the test crate",
    issue = "none" // Indicates no public tracking issue
)]
```

The empty `issue` field confirms this isn't intended for external stabilization[1][3]. For production code, use standard I/O methods like `std::io::stdout()` instead[2][3]. This feature exists solely to support Rust's internal testing infrastructure and shouldn't be used directly by developers.

Citations:
- 1: https://doc.rust-lang.org/src/std/io/stdio.rs.html
- 2: https://github.com/rust-lang/rust/blob/master/library/std/src/io/mod.rs
- 3: https://github.com/rust-lang/rust/blob/master/library/std/src/io/stdio.rs
- 4: https://doc.rust-lang.org/beta/unstable-book/library-features/internal-output-capture.html
- 5: https://doc.rust-lang.org/beta/unstable-book/
- 6: https://doc.rust-lang.org/beta/unstable-book/language-features/coroutines.html
- 7: https://doc.rust-lang.org/src/test/lib.rs.html
- 8: https://doc.rust-lang.org/beta/unstable-book/language-features/custom-test-frameworks.html
- 9: https://fuchsia.googlesource.com/third_party/rust/+/7bdad89f950d374c128edffacd9872f7dbc72f78%5E!/
- 10: https://github.com/rust-lang/rust/blob/master/library/std/src/lib.rs
Attention: Unstable Feature Usage – Proceed with Caution
The `#![feature(internal_output_capture)]` flag in use is confirmed to be unstable and strictly internal to Rust's test infrastructure. As detailed in Rust's documentation, this feature has no stabilization timeline, meaning its behavior could change or it could be removed in future Rust releases. While its use in the test code appears intentional and appropriate for capturing output within the framework, be mindful of potential breakage when upgrading Rust versions. Consider monitoring its status closely, or evaluating stable alternatives if this risk becomes critical.

p2pool/src/cli/args.rs (2)
141-143: LGTM! Well-documented new field.

The new `export_libp2p_info` field is properly documented and follows the existing pattern of optional `PathBuf` arguments.
216-231: LGTM! Good separation of concerns.

The `run_with_cli` function properly extracts the command handling logic, making the code more modular and easier to test.

p2pool/src/cli/commands/util.rs (1)
208-212: LGTM! Well-structured data type.

The `LibP2pInfo` struct is properly designed with appropriate serialization support.

p2pool/src/sharechain/p2chain_level.rs (1)
37-48: LGTM! The visibility change enables integration testing.

Making `P2BlockHeader` public is appropriate, as it allows the integration tests to access and validate block header information.

integration_tests/src/base_node_process.rs (4)
51-61: LGTM! Well-structured process management struct.

The `BaseNodeProcess` struct effectively encapsulates all necessary fields for managing a base node process in integration tests.
63-67: LGTM! Proper resource cleanup.

The `Drop` trait implementation ensures that the process is properly terminated when the struct is dropped.
70-81: LGTM! Clear debug representation.

The `Debug` implementation provides a clear representation of the process state while skipping non-debug fields.
83-85: LGTM! Simple default configuration wrapper.

The `spawn_base_node` function provides a convenient way to spawn a base node with default configuration.

p2pool/src/sharechain/p2block.rs (1)
40-60: LGTM! The visibility change enables integration testing.

Making `P2Block` public is appropriate, as it allows the integration tests to create and validate blocks.

p2pool/src/sharechain/lmdb_block_storage.rs (1)
46-48: LGTM! The visibility change enables integration testing.

Making `LmdbBlockStorage` public is appropriate, as it allows the integration tests to manage block storage.

p2pool/src/server/p2p/network.rs (3)
991-992: LGTM! Improved log clarity for seed peer disconnection.

The log level has been appropriately changed from `warn` to `info` for seed peer disconnection messages, as this is expected behavior rather than a warning condition.

Also applies to: 1087-1088
1082-1082: LGTM! Appropriate log level for peer info.

Changed log level from `info` to `debug` for the "no peers added" message, which is more appropriate for this type of diagnostic information.
113-113: ❓ Verification inconclusive

Verify the impact of making Config public.

The `Config` struct's visibility has been changed from `pub(crate)` to `pub`. While this change enables external configuration, ensure that all fields can be safely exposed outside the crate. Run this script to check for any sensitive fields that shouldn't be public:

🏁 Script executed:

```shell
#!/bin/bash
# Description: Check for any sensitive fields in Config that shouldn't be public

rg -A 30 'pub struct Config'
```

Length of output: 7395
Action Required: Verify External Exposure of Sensitive Config Fields

The change from `pub(crate)` to `pub` in the `Config` struct defined in p2pool/src/server/p2p/network.rs makes all configuration fields accessible externally. While most fields (e.g., network timeouts, ports, and service flags) appear safe to expose, please double-check that the following fields do not introduce security risks when public:

- `private_key_folder` – Confirm that exposing the path does not leak sensitive directory structures.
- `private_key` – Verify that allowing external access (even as an `Option<Keypair>`) does not permit unintended manipulation or disclosure of cryptographic key material.

If these fields are meant to be externally configurable, ensure proper documentation and, if necessary, implement additional safeguards (such as validation or masking) to mitigate potential risks.
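One lightweight safeguard along these lines is a hand-written `Debug` implementation that redacts key material so it cannot leak through logs. This is a hypothetical sketch (a stand-in struct with `Vec<u8>` in place of `Option<Keypair>`), not the project's actual `Config`:

```rust
use std::fmt;

// Hypothetical stand-in for the real Config struct.
pub struct Config {
    pub private_key_folder: String,
    pub private_key: Option<Vec<u8>>, // stand-in for Option<Keypair>
}

impl fmt::Debug for Config {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.debug_struct("Config")
            .field("private_key_folder", &self.private_key_folder)
            // Never print the key bytes themselves.
            .field("private_key", &self.private_key.as_ref().map(|_| "<redacted>"))
            .finish()
    }
}
```

With this in place, `format!("{:?}", config)` shows `Some("<redacted>")` instead of raw key bytes, even if the struct itself stays fully `pub`.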
cargo.toml (3)
1-7: LGTM! Well-structured workspace configuration. The workspace setup with `p2pool` and `integration_tests` members provides good organization for managing related packages.

8-9: LGTM! Future-proofing with resolver version 2. Setting `resolver = "2"` prepares for edition 2021 compatibility.

11-14: LGTM! Enhanced safety with overflow checks. Enabling overflow checks in release mode is a good security practice to catch potential integer overflow issues.
integration_tests/.husky/_/husky.sh (1)
1-31: LGTM! Well-structured hook script. The script implements good practices:

- Proper error handling with exit codes
- Debug logging capability
- Configuration file support
- Skip mechanism for initialization
🧰 Tools
🪛 Shellcheck (0.10.0)
[warning] 7-7: Declare and assign separately to avoid masking return values.
(SC2155)
[warning] 17-17: ShellCheck can't follow non-constant source. Use a directive to specify location.
(SC1090)
[error] 20-20: This applies export to the variable named readonly, which is probably not what you want. Use a separate command or the appropriate `declare` options instead.

(SC2316)
integration_tests/README.md (2)
17-28
: Clear Example for Running Tagged Tests
The instructions to run tests with specific tags are clear and provide both full and short option examples. No changes needed here, but keep the examples up to date if the tag naming conventions evolve.
49-52
: CI Instructions Are Informative
The CI section effectively outlines when the different groups of tests will run (PRs, nightly, and weekly). The details are clear and actionable.

p2pool/Cargo.toml (2)
3-7: Enhanced Package Metadata
The addition of `authors`, `repository`, `license`, and the update to `edition` significantly improve the package's transparency and maintainability. Please verify that the version (`0.22.2`) is aligned with the intended release strategy.

71-80: Clear Library and Binary Definitions
The new `[lib]` and `[[bin]]` sections clearly establish the library target and a corresponding binary. This explicit definition helps streamline the build process.

integration_tests/log4rs/cucumber.yml (4)
1-20: Well-Structured Appender Configuration
The configuration for the `stdout` and `cucumber` appenders (lines 1–20) is solid. The rolling file policies, file paths, and encoding patterns are all appropriately defined.

21-68: Consistent Configuration for Additional Appenders
The settings for the `base_node` and `other` appenders follow a consistent pattern with clear file paths and size triggers. If any of these appenders have unique considerations, adding inline comments could be helpful for future maintainers.

69-79: Root Logger Configuration Clarity
The root logger is set at the `debug` level and uses the `stdout` appender, which is beneficial for debugging during tests. The commented-out block offers useful context for alternative configurations.

80-118: Comprehensive Logger Definitions
The logger definitions for components such as `stdout`, `cucumber`, `comms`, `p2p`, and others are detailed and provide non-additive configurations where needed. This structure will help in isolating logs for different parts of the system.

integration_tests/cargo.toml (3)
1-7: Integration Tests Package Metadata
The metadata for the `sha_p2pool_integration_tests` package is well defined. Note that the version is set to `0.22.1`; please confirm if this slight difference from the main package version (`0.22.2`) is intentional.

9-30: Dependency Declarations Are Clear
The dependencies are clearly and meticulously declared, with fixed Git revisions where applicable. The relative path for `sha_p2pool` (line 26) is set correctly, ensuring the integration tests reference the proper module version.

61-65: Test Configuration for Cucumber is Well Set Up
The custom test configuration disables the default harness in favor of letting Cucumber manage output. This configuration is appropriate for integration testing purposes.

log4rs_sample.yml (4)
1-48: Appender Configurations in the Sample Log4rs File
The file defines several appenders (such as `stdout`, `p2pool`, `other`, etc.) with custom rolling file policies and threshold filters. This detailed configuration is well organized and should be effective for both console and file logging scenarios.

49-159: Root Logger and Additional Logger Settings
The root logger is appropriately set to `info` level with multiple appenders, and logger configurations for different modules (e.g., `tari::p2pool`, `tari::p2pool::peer_store`) are comprehensive and consistent.

190-196: P2P Server Logger Configuration
The new logger for `tari::p2pool::server::p2p` is configured at the `debug` level and uses both the `p2p` and `stdout` appenders. This setup should aid in deep diagnostics of P2P server activities.

197-203: Message Logging Logger is Correctly Set
The configuration for the `tari::p2pool::message_logging` logger is clear and concise, set to debug with the appropriate `message_logging` appender, ensuring detailed log capture for message flows.
Actionable comments posted: 4
🧹 Nitpick comments (6)
integration_tests/src/p2pool_process.rs (3)
4-12: Consider consistent use of the `anyhow` crate dependency.
It's unconventional to import `anyhow` through `cucumber::codegen::anyhow` rather than referencing it directly from the `anyhow` crate. Using `anyhow` as a direct dependency may improve clarity, ensure consistent version usage, and avoid potential side effects related to the `cucumber` crate implementation.
68-77: Potential environment configuration conflict.
`std::env::set_var("TARI_NETWORK", "localnet");` (line 76) changes the environment variable in-process, possibly affecting concurrency if multiple tests run simultaneously. Consider using unique environment variables or an in-memory configuration for each test to avoid unintended cross-test interference.
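One way to avoid the shared global entirely is to scope the variable to the spawned child process. This is a hypothetical sketch (the helper name `run_with_network` is illustrative, not from the PR):

```rust
use std::process::Command;

// Pass the network to the spawned node via the child's environment instead of
// mutating the test runner's own process environment.
fn run_with_network(cmd: &mut Command, network: &str) -> std::io::Result<std::process::Output> {
    cmd.env("TARI_NETWORK", network).output()
}
```

Each spawned process then sees its own value, so concurrently running scenarios cannot race on a shared process-wide variable.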
90-127
: Extract directory creation into a helper for clarity and error handling.
The block of directory creation and existence checks is correct but somewhat verbose. Consider extracting it into a helper function to streamline code and centralize error handling.

integration_tests/tests/cucumber.rs (3)
25-46
: Consolidate output management functions.
There are two similar functions for output management with subtle differences:

- `flush_stdout` only logs to the target
- `_flush_stdout` writes to stdout and logs, but has a commented-out flush call

Consider:

- Consolidating these into a single function with a parameter to control the output behavior
- Either removing or implementing the commented-out `flush()` call

-fn flush_stdout(buffer: &Arc<Mutex<Vec<u8>>>) {
-    info!(
-        target: LOG_TARGET_STDOUT,
-        "{}",
-        str::from_utf8(&buffer.lock().unwrap()).unwrap()
-    );
-    buffer.lock().unwrap().clear();
-}
-
-fn _flush_stdout(buffer: &Arc<Mutex<Vec<u8>>>) {
-    let mut buffer = buffer.lock().unwrap();
-    io::stdout().write_all(&buffer).unwrap();
-    // io::stdout().flush().unwrap();
-    info!(
-        target: LOG_TARGET_STDOUT,
-        "{}",
-        str::from_utf8(&buffer).unwrap()
-    );
-    buffer.clear();
-}
+fn flush_stdout(buffer: &Arc<Mutex<Vec<u8>>>, write_to_stdout: bool) {
+    let mut buffer = buffer.lock().unwrap();
+    let content = str::from_utf8(&buffer).unwrap();
+
+    if write_to_stdout {
+        io::stdout().write_all(&buffer).unwrap();
+        io::stdout().flush().unwrap();
+    }
+
+    info!(target: LOG_TARGET_STDOUT, "{}", content);
+    buffer.clear();
+}
48-55: Consider making the network configuration configurable.
The network is hardcoded to "localnet". Consider:

- Making this configurable through command-line arguments or environment variables
- Adding documentation about why "localnet" is the default choice

+const DEFAULT_NETWORK: &str = "localnet";
+
 fn main() {
-    std::env::set_var("TARI_TARGET_NETWORK", "localnet");
-    std::env::set_var("TARI_NETWORK", "localnet");
+    let network = std::env::var("TARI_NETWORK").unwrap_or_else(|_| DEFAULT_NETWORK.to_string());
+    std::env::set_var("TARI_TARGET_NETWORK", &network);
+    std::env::set_var("TARI_NETWORK", &network);
57-95: Enhance build configuration flexibility.
The build process has limited configurability. Consider:

- Making the build profile configurable (currently hardcoded to release)
- Adding a timeout for the build process
- Documenting the `DO_NOT_REBUILD_SHA_P2POOL` environment variable

+const BUILD_TIMEOUT_SECS: u64 = 300; // 5 minutes
+
 match fs::exists(&p2pool_exe) {
     Ok(exists) => {
         if exists {
             println!("Found sha_p2pool executable at: '{}'", p2pool_exe.display());
         } else {
+            let build_profile = std::env::var("SHA_P2POOL_BUILD_PROFILE").unwrap_or("release".to_string());
             println!(
-                "sha_p2pool executable not found at: '{}'. Building in release mode...",
+                "sha_p2pool executable not found at: '{}'. Building in {} mode...",
                 p2pool_exe.display(),
+                build_profile
             );
             if std::env::var("DO_NOT_REBUILD_SHA_P2POOL").map_or(true, |val| val != "0") {
-                let output = Command::new("cargo")
+                let mut command = Command::new("cargo");
+                let child = command
                     .current_dir(PathBuf::from(env!("CARGO_MANIFEST_DIR")).join(".."))
                     .env("TARI_TARGET_NETWORK", "localnet")
                     .arg("build")
-                    .arg("--release")
+                    .arg(format!("--{}", build_profile))
                     .arg("--bin")
                     .arg("sha_p2pool")
-                    .output()
-                    .expect("Failed to build sha_p2pool in release mode");
-                if !output.status.success() {
+                    .spawn()
+                    .expect("Failed to start sha_p2pool build");
+
+                match tokio::time::timeout(
+                    std::time::Duration::from_secs(BUILD_TIMEOUT_SECS),
+                    child.wait_with_output()
+                ).await {
+                    Ok(Ok(output)) if output.status.success() => {
+                        println!("Build completed successfully");
+                    },
+                    Ok(Ok(output)) => {
                         panic!(
-                            "Failed to build sha_p2pool in release mode: {}",
+                            "Failed to build sha_p2pool in {} mode: {}",
+                            build_profile,
                             String::from_utf8_lossy(&output.stderr)
                         );
-                }
+                    },
+                    Ok(Err(e)) => panic!("Failed to wait for build: {}", e),
+                    Err(_) => panic!("Build timed out after {} seconds", BUILD_TIMEOUT_SECS),
+                }
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)

- integration_tests/src/p2pool_process.rs (1 hunks)
- integration_tests/tests/cucumber.rs (1 hunks)
- integration_tests/tests/features/Sync.feature (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- integration_tests/tests/features/Sync.feature
⏰ Context from checks skipped due to timeout of 90000ms (2)
- GitHub Check: test (esmeralda)
- GitHub Check: machete
🔇 Additional comments (4)
integration_tests/src/p2pool_process.rs (4)
43-47: Appropriate drop logic with a minor caution.
Automatically killing the process inside the `Drop` trait is convenient, but be aware that it can hide errors related to dropped processes in asynchronous contexts. Ensure that potential panics or resource leaks do not occur if the `kill` method fails.

157-164: Gracefully handle empty `seed_peers`.
You properly check whether `seed_peers` is empty to decide whether to assign `None`. This is a good approach. The logic is clear and prevents unnecessary configuration.

274-385: Confirm command-line argument quoting for special characters.
When constructing shell arguments, consider potential spacing, quoting, or escaping issues for values that include spaces or special characters in `args`. If user-provided strings can contain special characters, ensure proper escaping or quoting to avoid unexpected behavior.

389-400: Cross-platform path retrieval is well-structured.
The `get_p2pool_exe_path` function handles Windows and non-Windows gracefully by conditionally appending `.exe`. This is a robust, cross-platform approach.
7c26af3 to 41cfff4 Compare
Actionable comments posted: 2
♻️ Duplicate comments (1)
p2pool/src/cli/commands/util.rs (1)

165-165: ⚠️ Potential issue: Fix incorrect squad calculation.
The expression `min(1, config.p2p_service.num_squads)` will always return 1 whenever `num_squads >= 1`, so the modulus is always 1 and the squad calculation effectively ignores the `num_squads` configuration. Apply this diff:

- (*swarm.local_peer_id().to_bytes().last().unwrap_or(&0) as usize) % min(1, config.p2p_service.num_squads);
+ (*swarm.local_peer_id().to_bytes().last().unwrap_or(&0) as usize) % config.p2p_service.num_squads;
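To see why the original expression is wrong, the modulus can be isolated. This is an illustrative sketch only: the helper names are hypothetical and a plain `u8` stands in for the last byte of the peer id:

```rust
use std::cmp::min;

// Reproduces the buggy expression: min(1, n) == 1 for any n >= 1,
// so the result of `x % min(1, n)` is always 0.
fn squad_buggy(last_byte: u8, num_squads: usize) -> usize {
    (last_byte as usize) % min(1, num_squads)
}

// The corrected expression distributes peers across all squads.
fn squad_fixed(last_byte: u8, num_squads: usize) -> usize {
    (last_byte as usize) % num_squads
}
```

With the buggy version, every peer maps to squad 0 regardless of configuration; the fixed version spreads peers over `num_squads` buckets.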
🧹 Nitpick comments (10)
integration_tests/README.md (2)

11-11: Add actual examples instead of placeholders.
Replace the "?????????" placeholders with actual examples to make the documentation more helpful:

- For `--name`, provide an actual test name example
- For `--input`, provide actual file path or glob pattern examples

Also applies to: 14-14, 34-34, 37-37, 40-40

30-30: Improve sentence structure for better readability.
Consider adding a comma after "files" and varying the sentence structure to avoid repetition:

-To run a specific file or files add `-- --input glob` to the command line
+To run a specific file or files, use the `-- --input glob` option

🧰 Tools
🪛 LanguageTool
[style] ~30-~30: Three successive sentences begin with the same word. Consider rewording the sentence or use a thesaurus to find a synonym.
Context: ...est cucumber -- -t "@critical" ``` - To run a specific file or files add `-- --...(ENGLISH_WORD_REPEAT_BEGINNING_RULE)
[uncategorized] ~30-~30: Possible missing comma found.
Context: ...cal" ``` - To run a specific file or files add-- --input glob
to the command li...(AI_HYDRA_LEO_MISSING_COMMA)
integration_tests/.husky/_/husky.sh (2)
7-7: Refactor the declaration of `hook_name`.
The command `readonly hook_name="$(basename "$0")"` may mask return values. It is preferable to separate the assignment from marking the variable as read-only. For example:

- readonly hook_name="$(basename "$0")"
+ hook_name="$(basename "$0")"
+ readonly hook_name
🪛 Shellcheck (0.10.0)
[warning] 7-7: Declare and assign separately to avoid masking return values.
(SC2155)
15-18: Add a ShellCheck directive for sourcing non-constant files.
ShellCheck warns about the dynamic sourcing on line 17. To address this, consider adding a directive (e.g. `# shellcheck source=/dev/null`) before the sourcing command so that ShellCheck is informed of your intent:

- if [ -f ~/.huskyrc ]; then
-   debug "sourcing ~/.huskyrc"
-   . ~/.huskyrc
- fi
+ if [ -f ~/.huskyrc ]; then
+   debug "sourcing ~/.huskyrc"
+   # shellcheck source=/dev/null
+   . ~/.huskyrc
+ fi
🪛 Shellcheck (0.10.0)
[warning] 17-17: ShellCheck can't follow non-constant source. Use a directive to specify location.
(SC1090)
p2pool/src/cli/commands/util.rs (1)
208-243: Consider adding file locking mechanism.
The `LibP2pInfo` struct has good file operations, but concurrent access to the file could lead to race conditions. Consider adding file locking:

+use std::fs::OpenOptions;
+use fs2::FileExt;

 impl LibP2pInfo {
     pub fn save_to_file(&self, path: &Path) -> io::Result<()> {
         if let Some(parent) = path.parent() {
             if let Ok(false) = fs::exists(parent) {
                 fs::create_dir_all(parent)?;
             }
         }
-        let file = File::create(path)?;
+        let file = OpenOptions::new()
+            .write(true)
+            .create(true)
+            .open(path)?;
+        file.lock_exclusive()?;
         let writer = BufWriter::new(file);
         serde_json::to_writer(writer, &self)?;
+        file.unlock()?;
         Ok(())
     }

     pub fn read_from_file(path: &Path, timeout: Duration) -> io::Result<Self> {
         let start = Instant::now();
         while !path.exists() {
             if start.elapsed() >= timeout {
                 return Err(io::Error::new(
                     io::ErrorKind::TimedOut,
                     format!("Timeout waiting for '{}' to be created", path.display()),
                 ));
             }
             sleep(Duration::from_millis(100));
         }
-        let file = File::open(path)?;
+        let file = OpenOptions::new()
+            .read(true)
+            .open(path)?;
+        file.lock_shared()?;
         let reader = BufReader::new(file);
         let libp2p_info = serde_json::from_reader(reader)?;
+        file.unlock()?;
         Ok(libp2p_info)
     }
 }

integration_tests/src/base_node_process.rs (2)
88-215: Consider more robust error handling.
In testing contexts, `unwrap()` and `panic!` are typically acceptable for quick detection of failures, but wrapping them in descriptive error messages (or returning a Result) might improve debugging if something goes wrong in environment setup.
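As a minimal sketch of that suggestion (the helper name and file layout are hypothetical, not from the PR), a contextual panic message makes a failing setup step report what it was doing:

```rust
use std::fs;

// Prefer a descriptive panic over a bare unwrap in test setup, so a failure
// reports which file was being read and the underlying I/O error.
fn read_node_identity(path: &str) -> String {
    fs::read_to_string(path)
        .unwrap_or_else(|e| panic!("failed to read node identity from '{}': {}", path, e))
}
```

The happy path is unchanged, but a missing or unreadable file now fails with the offending path instead of an anonymous `Result::unwrap()` panic.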
222-236: Potential infinite loop in `kill()`.
This loop will block indefinitely if any other process continues to hold the port. Consider adding a small delay or maximum retry attempts so tests don't hang forever in edge cases.
147-154
: Handle missing base node more gracefully
Currently, the code panics if no base node is found (lines 147-153). While this may be acceptable for test code, consider returning an error or otherwise handling this gracefully to avoid halting the entire test suite.
389-400
: Validate existence of the p2pool executable
The function returns a hardcoded path. Consider adding checks to ensure the file exists before attempting to spawn the process, improving error reporting if the executable isn't found.integration_tests/src/miner.rs (1)
77-88: Evaluate the 10,000,000 iteration mining limit.
Looping until a matching difficulty is found, up to 10 million times, could impact performance and block CI pipelines. Consider lowering the iteration limit or adopting a more efficient mining approach for tests.
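A bounded nonce search that gives up explicitly can be sketched as follows (illustrative only: `difficulty` is a stand-in closure for the real difficulty evaluation, and the function name is hypothetical):

```rust
// Search at most `max_iters` nonces; returning None lets the test fail fast
// with a clear message instead of spinning for 10 million iterations.
fn find_nonce(target: u64, max_iters: u64, difficulty: impl Fn(u64) -> u64) -> Option<u64> {
    (0..max_iters).find(|&nonce| difficulty(nonce) >= target)
}
```

For localnet tests the target difficulty can be set low enough that a few hundred iterations suffice, making the limit a safety net rather than the expected path.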
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (31)
- .github/workflows/audit.yml (1 hunks)
- cargo.toml (1 hunks)
- integration_tests/.husky/_/husky.sh (1 hunks)
- integration_tests/README.md (1 hunks)
- integration_tests/cargo.toml (1 hunks)
- integration_tests/log4rs/cucumber.yml (1 hunks)
- integration_tests/src/base_node_process.rs (1 hunks)
- integration_tests/src/lib.rs (1 hunks)
- integration_tests/src/miner.rs (1 hunks)
- integration_tests/src/p2pool_process.rs (1 hunks)
- integration_tests/src/world.rs (1 hunks)
- integration_tests/tests/cucumber.rs (1 hunks)
- integration_tests/tests/features/Sync.feature (1 hunks)
- integration_tests/tests/steps/base_node_steps.rs (1 hunks)
- integration_tests/tests/steps/mod.rs (1 hunks)
- integration_tests/tests/steps/p2pool_steps.rs (1 hunks)
- log4rs_sample.yml (1 hunks)
- p2pool/Cargo.toml (2 hunks)
- p2pool/src/cli/args.rs (4 hunks)
- p2pool/src/cli/commands/mod.rs (1 hunks)
- p2pool/src/cli/commands/util.rs (5 hunks)
- p2pool/src/cli/mod.rs (1 hunks)
- p2pool/src/lib.rs (1 hunks)
- p2pool/src/main.rs (1 hunks)
- p2pool/src/server/config.rs (1 hunks)
- p2pool/src/server/p2p/network.rs (27 hunks)
- p2pool/src/server/p2p/peer_store.rs (3 hunks)
- p2pool/src/sharechain/lmdb_block_storage.rs (1 hunks)
- p2pool/src/sharechain/p2block.rs (1 hunks)
- p2pool/src/sharechain/p2chain_level.rs (1 hunks)
- src/cli/mod.rs (0 hunks)
💤 Files with no reviewable changes (1)
- src/cli/mod.rs
🚧 Files skipped from review as they are similar to previous changes (18)
- p2pool/src/cli/commands/mod.rs
- p2pool/src/sharechain/p2chain_level.rs
- log4rs_sample.yml
- p2pool/src/server/p2p/network.rs
- p2pool/src/cli/mod.rs
- p2pool/Cargo.toml
- p2pool/src/sharechain/p2block.rs
- p2pool/src/server/config.rs
- cargo.toml
- integration_tests/tests/features/Sync.feature
- integration_tests/tests/steps/p2pool_steps.rs
- p2pool/src/sharechain/lmdb_block_storage.rs
- .github/workflows/audit.yml
- integration_tests/tests/steps/base_node_steps.rs
- integration_tests/src/lib.rs
- integration_tests/src/world.rs
- p2pool/src/lib.rs
- integration_tests/tests/steps/mod.rs
🧰 Additional context used
🪛 Shellcheck (0.10.0)
integration_tests/.husky/_/husky.sh
[warning] 7-7: Declare and assign separately to avoid masking return values.
(SC2155)
[warning] 17-17: ShellCheck can't follow non-constant source. Use a directive to specify location.
(SC1090)
[error] 20-20: This applies export to the variable named readonly, which is probably not what you want. Use a separate command or the appropriate declare
options instead.
(SC2316)
🪛 LanguageTool
integration_tests/README.md
[style] ~30-~30: Three successive sentences begin with the same word. Consider rewording the sentence or use a thesaurus to find a synonym.
Context: ...est cucumber -- -t "@critical" ``` - To run a specific file or files add `-- --...
(ENGLISH_WORD_REPEAT_BEGINNING_RULE)
[uncategorized] ~30-~30: Possible missing comma found.
Context: ...cal" ``` - To run a specific file or files add -- --input glob
to the command li...
(AI_HYDRA_LEO_MISSING_COMMA)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: test (esmeralda)
🔇 Additional comments (20)
integration_tests/tests/cucumber.rs (3)
3-3: Document the usage of unstable feature.
The code uses the unstable `internal_output_capture` feature, which is an internal implementation detail of Rust's test infrastructure. Please add documentation explaining its necessity and potential migration plans.

104-109: Improve logging configuration error handling and flexibility.
The logging configuration can be improved by adding error handling and making paths configurable.

111-161: Clean up test configuration and improve error handling.
The test configuration needs improvements in error handling, documentation, and configuration values.

p2pool/src/server/p2p/peer_store.rs (1)

296-296: LGTM! Enhanced logging for peer store operations.
The added debug logs improve traceability of peer store operations by logging both the attempt to add a peer and the successful addition.

Also applies to: 358-358
integration_tests/log4rs/cucumber.yml (1)
1-119: LGTM! Well-structured logging configuration.
The logging configuration follows best practices:
- Appropriate size limits and rotation policies for log files
- Detailed log patterns with relevant information
- Component-specific log levels
- Clear separation of concerns with different appenders
integration_tests/cargo.toml (3)
1-8: Package Metadata Verification
The package block (lines 1–8) correctly defines essential metadata including the package name, authors, repository URL, license, version, and edition. Please ensure that these details remain consistent with the overall project documentation and that the versioning follows your release strategy.

9-60: Dependency Management and Revision Pinning
The dependencies block (lines 9–60) is detailed and pins critical Tari-related libraries to a specific commit (`rev = "53c1d47971994b2ac8795768e4b8a44800ea9988"`). This approach helps with reproducibility. However, please verify periodically that:

- The pinned commit hash remains up-to-date with any important security or functionality updates.
- The local path dependency for `sha_p2pool` (line 26) correctly resolves relative to the integration_tests package location.
- All feature flags (e.g., for `libp2p` and `cucumber`) are necessary for your testing scenario.

Overall, the dependency declarations are precise and follow good practices.
61-64: Cucumber Test Configuration
The test configuration block (lines 61–64) is set up appropriately for Cucumber integration tests by:
- Naming the test target "cucumber" (ensure this exactly matches your test file name).
- Disabling the default harness to allow Cucumber to manage its own output.
This configuration is in line with standard practices for using Cucumber in Rust projects.
p2pool/src/cli/args.rs (3)
141-143: LGTM! Well-documented feature addition.
The new `export_libp2p_info` field is well-documented and its purpose for testing scenarios is clearly explained.

216-231: Good refactoring for improved modularity.
Moving the command handling logic to a separate `run_with_cli` function improves code organization and testability.

15-16: ✅ Verification successful

Verify the impact of increased visibility.
The `StartArgs` struct and its `base_dir` field are now public. While this improves modularity, ensure this change doesn't expose sensitive information or break encapsulation principles.

Also applies to: 19-19

🏁 Script executed:

#!/bin/bash
# Search for any direct usage of StartArgs or base_dir outside the cli module
rg -A 3 "StartArgs|base_dir" --type rust

Length of output: 8078

Public Access Changes Verified – No Further Action Required
The increased visibility for the `StartArgs` struct and its `base_dir` field has been examined. The search output confirms their use across integration tests and other modules as expected, without exposing sensitive information or breaking encapsulation. This change improves modularity and is appropriately safe.

p2pool/src/cli/commands/util.rs (1)
169-177: LGTM! Good error handling for libp2p info export.
The error handling for saving libp2p info is well-implemented with appropriate warning logs.

p2pool/src/main.rs (1)

61-61: LGTM!
The command handling is well-structured with proper async/await, error handling, and shutdown signal integration.
integration_tests/src/base_node_process.rs (5)
1-21: License block acknowledged.
No issues found here; the standard license text is preserved correctly.

50-61: Struct definition appears sound.
The fields clearly encapsulate all necessary process information, and naming conventions are consistent with the codebase.

63-67: Drop implementation is convenient.
Automatically invoking `kill()` ensures proper cleanup. Make sure `kill()` can't hang indefinitely (see further comment on the `kill()` method).

69-81: Debug implementation is appropriate.
Excluding the unneeded context makes sense for clarity.

83-85: Function acts as a clean forwarder.
No issues found; encapsulation of default config here is neat.

integration_tests/src/p2pool_process.rs (1)
255-272: Add a short delay in the port-free check loop.
This logic repeatedly attempts to bind ports in a tight loop, potentially causing unnecessary CPU usage. Inserting a small delay (e.g., 100 ms) between attempts relieves pressure on system resources.

 for _ in 0..20 {
+    std::thread::sleep(std::time::Duration::from_millis(100));
     if !p2p_port_closed && ...
 }
integration_tests/src/miner.rs (1)
32-60: Handle errors gracefully in block mining.
Multiple `unwrap()` calls can lead to unexpected panics if any gRPC request fails. Consider returning a `Result` and incorporating proper error handling to maintain test resilience.
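A minimal sketch of the `Result`-based approach (hypothetical error type and helper; the real code could equally use `anyhow::Result`):

```rust
use std::fmt;

// Hypothetical error type for the mining test helpers.
#[derive(Debug)]
struct MiningError(String);

impl fmt::Display for MiningError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "mining error: {}", self.0)
    }
}
impl std::error::Error for MiningError {}

// Instead of `response.unwrap()`, convert a missing value into a typed error
// that the cucumber step can report and assert on.
fn tip_height(response: Option<u64>) -> Result<u64, MiningError> {
    response.ok_or_else(|| MiningError("gRPC tip request returned no data".into()))
}
```

Each gRPC call site can then use `?` to propagate the error up to the step definition, where the scenario fails with a readable message instead of a panic backtrace.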
a2d83d5 to d7f7586 Compare
Actionable comments posted: 1
♻️ Duplicate comments (1)
p2pool/src/cli/commands/util.rs (1)
169-169: ⚠️ Potential issue: Fix incorrect squad calculation.
The expression `min(1, config.p2p_service.num_squads)` will always return 1 whenever `num_squads >= 1`, so the squad calculation effectively ignores the `num_squads` configuration. Apply this diff:

- (*swarm.local_peer_id().to_bytes().last().unwrap_or(&0) as usize) % min(1, config.p2p_service.num_squads);
+ (*swarm.local_peer_id().to_bytes().last().unwrap_or(&0) as usize) % config.p2p_service.num_squads;
🧹 Nitpick comments (5)
p2pool/src/cli/commands/util.rs (4)
55-59: Consider using a configuration constant for logging paths.
The hardcoded relative paths to logging configuration files make the code less maintainable and more prone to errors if the project structure changes. Consider extracting these paths to configuration constants:

+ const DETAILED_LOG_CONFIG: &str = "../../../../integration_tests/log4rs/log4rs_detailed.yml";
+ const DEFAULT_LOG_CONFIG: &str = "../../../../log4rs_sample.yml";

 if std::env::var("PERFORM_DETAIL_LOGGING").is_ok() {
-    include_str!("../../../../integration_tests/log4rs/log4rs_detailed.yml")
+    include_str!(DETAILED_LOG_CONFIG)
 } else {
-    include_str!("../../../../log4rs_sample.yml")
+    include_str!(DEFAULT_LOG_CONFIG)
 }
173-181: Enhance error handling for LibP2pInfo export.
The current error handling only logs a generic error message. Consider providing more context about what specifically failed (file creation, serialization, etc.):
- if let Err(err) = libp2p_info.save_to_file(&path) { - warn!(target: LOG_TARGET, "Failed to save libp2p info to file: '{}'", err); + if let Err(err) = libp2p_info.save_to_file(&path) { + warn!( + target: LOG_TARGET, + "Failed to save libp2p info to file '{}': {}", + path.display(), + match err.kind() { + io::ErrorKind::NotFound => "directory not found", + io::ErrorKind::PermissionDenied => "permission denied", + _ => err.to_string().as_str() + } + ); }
220-224: Simplify directory creation logic.
The directory existence check is unnecessarily complex and can be simplified:

- if let Some(parent) = path.parent() {
-     if let Ok(false) = fs::exists(parent) {
-         fs::create_dir_all(parent)?;
-     }
- }
+ if let Some(parent) = path.parent() {
+     fs::create_dir_all(parent)?;
+ }

The `create_dir_all` function is idempotent and will not fail if the directory already exists.
231-246: Improve file waiting mechanism.
The current implementation uses a busy wait with `sleep`, which is inefficient. Consider using a more efficient waiting mechanism or filesystem notifications:

 pub fn read_from_file(path: &Path, timeout: Duration) -> io::Result<Self> {
-    let start = Instant::now();
-    while !path.exists() {
-        if start.elapsed() >= timeout {
-            return Err(io::Error::new(
-                io::ErrorKind::TimedOut,
-                format!("Timeout waiting for '{}' to be created", path.display()),
-            ));
-        }
-        sleep(Duration::from_millis(100));
-    }
+    let mut attempts = (timeout.as_millis() / 100) as u32;
+    while attempts > 0 {
+        if path.exists() {
+            let file = File::open(path)?;
+            let reader = BufReader::new(file);
+            return serde_json::from_reader(reader).map_err(io::Error::from);
+        }
+        sleep(Duration::from_millis(100));
+        attempts -= 1;
+    }
+    Err(io::Error::new(
+        io::ErrorKind::TimedOut,
+        format!("Timeout waiting for '{}' to be created", path.display()),
+    ))
-    let file = File::open(path)?;
-    let reader = BufReader::new(file);
-    let libp2p_info = serde_json::from_reader(reader)?;
-    Ok(libp2p_info)
 }

This implementation:
- Uses a fixed number of attempts instead of continuous time checking
- Combines the file reading into the main loop
- Reduces the number of file existence checks
integration_tests/tests/cucumber.rs (1)
48-103
: Improve Initialization Logic and Error Handling
The initialization code could benefit from better organization and error handling:
- Complex nested match statements for rebuild logic
- Hardcoded environment variable names
- Basic error handling for command execution
Consider refactoring the initialization logic:
```diff
+const ENV_NETWORK: &str = "TARI_NETWORK";
+const ENV_TARGET_NETWORK: &str = "TARI_TARGET_NETWORK";
+const ENV_NO_REBUILD: &str = "DO_NOT_REBUILD_SHA_P2POOL";
+
+fn should_rebuild_p2pool() -> bool {
+    match std::env::var(ENV_NO_REBUILD) {
+        Ok(val) => val.to_uppercase() != "TRUE" && val != "1",
+        Err(_) => true,
+    }
+}
+
+fn build_p2pool() -> io::Result<()> {
+    let manifest_dir = PathBuf::from(env!("CARGO_MANIFEST_DIR")).join("..");
+    let status = Command::new("cargo")
+        .current_dir(manifest_dir)
+        .env(ENV_TARGET_NETWORK, "localnet")
+        .env("PERFORM_DETAIL_LOGGING", "true")
+        .args(["build", "--release", "--bin", "sha_p2pool"])
+        .status()?;
+
+    if !status.success() {
+        return Err(io::Error::new(
+            io::ErrorKind::Other,
+            "Failed to build sha_p2pool in release mode",
+        ));
+    }
+    Ok(())
+}
+
 fn main() {
-    std::env::set_var("TARI_TARGET_NETWORK", "localnet");
-    std::env::set_var("TARI_NETWORK", "localnet");
+    std::env::set_var(ENV_TARGET_NETWORK, "localnet");
+    std::env::set_var(ENV_NETWORK, "localnet");
     if let Err(err) = set_network_if_choice_valid(Network::LocalNet) {
         let msg = format!("Error setting network: {}", err);
         error!(target: LOG_TARGET, "{}", msg);
         panic!("{}", msg);
     }
     let p2pool_exe = get_p2pool_exe_path();
-    match fs::exists(&p2pool_exe) {
-        Ok(exists) => {
-            if exists {
-                println!("Found sha_p2pool executable at: '{}'", p2pool_exe.display());
-            } else {
-                println!("sha_p2pool executable not found at: '{}'", p2pool_exe.display());
-            }
-            let rebuild = match std::env::var("DO_NOT_REBUILD_SHA_P2POOL") {
-                Ok(val) => {
-                    println!("Secondary build environment variable: 'DO_NOT_REBUILD_SHA_P2POOL = {}'", val);
-                    if val == "0" || val.to_uppercase() == "false".to_uppercase() {
-                        true
-                    } else {
-                        false
-                    }
-                },
-                Err(_) => true,
-            };
-            if rebuild {
-                println!("Building sha_p2pool executable in release mode...");
-                let output = Command::new("cargo")
-                    .current_dir(PathBuf::from(env!("CARGO_MANIFEST_DIR")).join(".."))
-                    .env("TARI_TARGET_NETWORK", "localnet")
-                    .env("PERFORM_DETAIL_LOGGING", "true")
-                    .arg("build")
-                    .arg("--release")
-                    .arg("--bin")
-                    .arg("sha_p2pool")
-                    .output()
-                    .expect("Failed to build sha_p2pool in release mode");
-                if !output.status.success() {
-                    panic!(
-                        "Failed to build sha_p2pool in release mode: {}",
-                        String::from_utf8_lossy(&output.stderr)
-                    );
-                }
-            }
-        },
-        Err(err) => {
-            panic!(
-                "Failed to check if sha_p2pool executable exists at: '{}', error: {}",
-                p2pool_exe.display(),
-                err
-            );
-        },
-    }
+    match fs::exists(&p2pool_exe).map_err(|e| {
+        format!("Failed to check if sha_p2pool exists at '{}': {}", p2pool_exe.display(), e)
+    }) {
+        Ok(true) => println!("Found sha_p2pool at: '{}'", p2pool_exe.display()),
+        Ok(false) => {
+            println!("sha_p2pool not found at: '{}'", p2pool_exe.display());
+            if should_rebuild_p2pool() {
+                println!("Building sha_p2pool in release mode...");
+                if let Err(e) = build_p2pool() {
+                    panic!("Failed to build sha_p2pool: {}", e);
+                }
+            }
+        }
+        Err(e) => panic!("{}", e),
+    }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- integration_tests/tests/cucumber.rs (1 hunks)
- p2pool/src/cli/commands/util.rs (5 hunks)
- p2pool/src/server/grpc/p2pool.rs (7 hunks)
- p2pool/src/server/p2p/network.rs (29 hunks)
✅ Files skipped from review due to trivial changes (1)
- p2pool/src/server/grpc/p2pool.rs
🚧 Files skipped from review as they are similar to previous changes (1)
- p2pool/src/server/p2p/network.rs
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: test (esmeralda)
🔇 Additional comments (3)
integration_tests/tests/cucumber.rs (3)
3-4
: Document Usage of Unstable FeatureThe code uses the unstable
internal_output_capture
feature without documentation explaining its necessity or potential migration plans.
105-111
: Improve Logging Configuration Error Handling and Flexibility
112-162
: Clean Up Test Configuration and Improve Error Handling
Force-pushed from d7f7586 to 5fa8687
Actionable comments posted: 0
♻️ Duplicate comments (3)
integration_tests/.husky/_/husky.sh (1)
20-20
:⚠️ Potential issueFix the export readonly syntax.
The current line incorrectly combines export with readonly.
```diff
- export readonly husky_skip_init=1
+ readonly husky_skip_init=1
+ export husky_skip_init
```
🧰 Tools
🪛 Shellcheck (0.10.0)
[error] 20-20: This applies export to the variable named readonly, which is probably not what you want. Use a separate command or the appropriate
declare
options instead.(SC2316)
integration_tests/src/lib.rs (2)
16-27
:⚠️ Potential issueAdd timeout to port selection loop.
The infinite loop could potentially hang if no ports are available.
```diff
 pub fn get_port(range: Range<u16>) -> Option<u16> {
     let min = range.clone().min().expect("A minimum possible port number");
     let max = range.max().expect("A maximum possible port number");
+    const MAX_ATTEMPTS: u32 = 100;
+    let mut attempts = 0;
     loop {
         let port = rand::thread_rng().gen_range(min..max);
         if TcpListener::bind(("127.0.0.1", port)).is_ok() {
             return Some(port);
         }
+        attempts += 1;
+        if attempts >= MAX_ATTEMPTS {
+            return None;
+        }
     }
 }
```
57-75
:⚠️ Potential issueHandle unwrap errors gracefully.
The unwrap calls could panic if the peer strings don't match existing nodes.
```diff
 pub async fn get_p2pool_node_peer_address(world: &TariWorld, peers: &[String]) -> Vec<String> {
     peers
         .iter()
-        .map(|peer_string| {
-            let peer = world.p2pool_nodes.get(peer_string.as_str()).unwrap().node_id;
-            peer.to_string()
-        })
+        .filter_map(|peer_string| {
+            world.p2pool_nodes.get(peer_string.as_str())
+                .map(|node| node.node_id.to_string())
+        })
         .collect()
 }

 pub async fn get_base_node_peer_addresses(world: &TariWorld, peers: &[String]) -> Vec<String> {
     peers
         .iter()
-        .map(|peer_string| {
-            let peer = world.base_nodes.get(peer_string.as_str()).unwrap().identity.to_peer();
-            peer.to_short_string()
-        })
+        .filter_map(|peer_string| {
+            world.base_nodes.get(peer_string.as_str())
+                .map(|node| node.identity.to_peer().to_short_string())
+        })
         .collect()
 }
```
🧹 Nitpick comments (4)
p2pool/src/sharechain/in_memory.rs (1)
132-161
: Consider removing redundant clone calls.The
clone()
calls on theu64
values are unnecessary as they implementCopy
.- .clone(), + , - .clone(), + ,integration_tests/README.md (1)
11-11
: Add concrete examples for test execution commands.Replace the placeholder "?????????" with actual examples to make the documentation more helpful:
```diff
- cargo test --release --test cucumber -- --name "?????????"
+ cargo test --release --test cucumber -- --name "Sync between nodes"
- cargo test --release --test cucumber -- -n "?????????"
+ cargo test --release --test cucumber -- -n "Sync between nodes"
- cargo test --release --test cucumber -- --input "?????????"
+ cargo test --release --test cucumber -- --input "tests/features/Sync.feature"
- cargo test --release --test cucumber -- --input "?????????"
+ cargo test --release --test cucumber -- --input "tests/features/Sync.feature"
- cargo test --release --test cucumber -- -i "?????????"
+ cargo test --release --test cucumber -- -i "tests/features/Sync.feature"
```
Also applies to: 14-14, 34-34, 37-37, 40-40
integration_tests/.husky/_/husky.sh (1)
7-7
: Avoid masking return values in variable assignment.The current line could mask potential errors in the command substitution.
```diff
- readonly hook_name="$(basename "$0")"
+ readonly hook_name
+ hook_name="$(basename "$0")"
```
🧰 Tools
🪛 Shellcheck (0.10.0)
[warning] 7-7: Declare and assign separately to avoid masking return values.
(SC2155)
integration_tests/src/lib.rs (1)
36-55
: Consider returning Result instead of panicking.The
wait_for_service
function panics if the service doesn't start, which might not be the best error handling strategy for a library function.

```diff
-pub async fn wait_for_service(port: u16) {
+pub async fn wait_for_service(port: u16) -> Result<(), String> {
     use tokio::net::TcpStream;
     let max_tries = 4 * 60;
     let mut attempts = 0;
     loop {
         if let Ok(stream) = TcpStream::connect(("127.0.0.1", port)).await {
             drop(stream); // Explicitly drop the connection
-            return;
+            return Ok(());
         }
         if attempts >= max_tries {
-            panic!("Service on port {} never started", port);
+            return Err(format!("Service on port {} never started", port));
         }
         tokio::time::sleep(Duration::from_millis(250)).await;
         attempts += 1;
     }
 }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`Cargo.lock` (excluded by `!**/*.lock`)
📒 Files selected for processing (33)
- .github/workflows/audit.yml (1 hunks)
- cargo.toml (1 hunks)
- integration_tests/.husky/_/husky.sh (1 hunks)
- integration_tests/README.md (1 hunks)
- integration_tests/cargo.toml (1 hunks)
- integration_tests/log4rs/cucumber.yml (1 hunks)
- integration_tests/src/base_node_process.rs (1 hunks)
- integration_tests/src/lib.rs (1 hunks)
- integration_tests/src/miner.rs (1 hunks)
- integration_tests/src/p2pool_process.rs (1 hunks)
- integration_tests/src/world.rs (1 hunks)
- integration_tests/tests/cucumber.rs (1 hunks)
- integration_tests/tests/features/Sync.feature (1 hunks)
- integration_tests/tests/steps/base_node_steps.rs (1 hunks)
- integration_tests/tests/steps/mod.rs (1 hunks)
- integration_tests/tests/steps/p2pool_steps.rs (1 hunks)
- p2pool/Cargo.toml (2 hunks)
- p2pool/src/cli/args.rs (4 hunks)
- p2pool/src/cli/commands/mod.rs (1 hunks)
- p2pool/src/cli/commands/util.rs (6 hunks)
- p2pool/src/cli/mod.rs (1 hunks)
- p2pool/src/lib.rs (1 hunks)
- p2pool/src/main.rs (1 hunks)
- p2pool/src/server/config.rs (4 hunks)
- p2pool/src/server/grpc/p2pool.rs (7 hunks)
- p2pool/src/server/p2p/network.rs (31 hunks)
- p2pool/src/server/p2p/peer_store.rs (3 hunks)
- p2pool/src/server/server.rs (1 hunks)
- p2pool/src/sharechain/in_memory.rs (4 hunks)
- p2pool/src/sharechain/lmdb_block_storage.rs (1 hunks)
- p2pool/src/sharechain/p2block.rs (1 hunks)
- p2pool/src/sharechain/p2chain_level.rs (1 hunks)
- src/cli/mod.rs (0 hunks)
💤 Files with no reviewable changes (1)
- src/cli/mod.rs
🚧 Files skipped from review as they are similar to previous changes (22)
- p2pool/src/sharechain/p2chain_level.rs
- p2pool/src/server/grpc/p2pool.rs
- integration_tests/tests/features/Sync.feature
- .github/workflows/audit.yml
- p2pool/src/cli/commands/mod.rs
- p2pool/src/server/p2p/network.rs
- p2pool/Cargo.toml
- p2pool/src/server/p2p/peer_store.rs
- integration_tests/tests/steps/mod.rs
- integration_tests/tests/steps/base_node_steps.rs
- integration_tests/src/miner.rs
- p2pool/src/sharechain/lmdb_block_storage.rs
- cargo.toml
- p2pool/src/cli/mod.rs
- integration_tests/src/base_node_process.rs
- p2pool/src/sharechain/p2block.rs
- integration_tests/src/world.rs
- integration_tests/cargo.toml
- p2pool/src/lib.rs
- integration_tests/src/p2pool_process.rs
- integration_tests/tests/cucumber.rs
- integration_tests/log4rs/cucumber.yml
🧰 Additional context used
🪛 Shellcheck (0.10.0)
integration_tests/.husky/_/husky.sh
[warning] 7-7: Declare and assign separately to avoid masking return values.
(SC2155)
[warning] 17-17: ShellCheck can't follow non-constant source. Use a directive to specify location.
(SC1090)
[error] 20-20: This applies export to the variable named readonly, which is probably not what you want. Use a separate command or the appropriate declare
options instead.
(SC2316)
🪛 LanguageTool
integration_tests/README.md
[style] ~30-~30: Three successive sentences begin with the same word. Consider rewording the sentence or use a thesaurus to find a synonym.
Context: ...est cucumber -- -t "@critical" ``` - To run a specific file or files add `-- --...
(ENGLISH_WORD_REPEAT_BEGINNING_RULE)
⏰ Context from checks skipped due to timeout of 90000ms (2)
- GitHub Check: test (esmeralda)
- GitHub Check: machete
🔇 Additional comments (14)
p2pool/src/server/server.rs (1)
162-164
: LGTM! Improved logging clarity.The log messages have been enhanced to provide more detailed information about the network silence delay and its completion.
p2pool/src/main.rs (1)
13-13
: CLI Module Reorganization: Inconsistent Reference Paths DetectedThe updated import in
p2pool/src/main.rs
now usessha_p2pool::Cli
, which is correct under the new module structure. However, our search revealed multiple remaining references tocrate::cli
in these files:
p2pool/src/cli/args.rs
p2pool/src/cli/mod.rs
p2pool/src/cli/commands/start.rs
p2pool/src/cli/commands/list_squads.rs
Please review these files to ensure that all usages of the
Cli
type (and related CLI constructs) are updated to reflect the new organization.p2pool/src/server/config.rs (1)
11-12
: LGTM! Well-structured configuration enhancements.The changes improve the configuration system by:
- Adding
Debug
derive for better debugging capabilities- Adding optional target difficulty fields for both SHA3 and RandomX
- Following the established builder pattern for the new configuration options
Also applies to: 27-28, 83-99
p2pool/src/cli/commands/util.rs (2)
181-181
: Fix incorrect squad calculation.The expression
min(1, config.p2p_service.num_squads)
will always return 1, making the squad calculation incorrect. This effectively ignores the num_squads configuration.Apply this diff:
```diff
- (*swarm.local_peer_id().to_bytes().last().unwrap_or(&0) as usize) % min(1, config.p2p_service.num_squads);
+ (*swarm.local_peer_id().to_bytes().last().unwrap_or(&0) as usize) % config.p2p_service.num_squads;
```
224-259
: LGTM! Well-implemented LibP2pInfo struct.The implementation includes:
- Proper serialization/deserialization support
- Robust file I/O with parent directory creation
- Well-designed timeout mechanism for file reading
- Appropriate error handling
p2pool/src/cli/args.rs (4)
14-156
: LGTM! The new fields enhance testing capabilities.The new fields added to
StartArgs
are well-documented and serve clear testing purposes:
export_libp2p_info
: Enables file-based handshake for testingnetwork_silence_delay
: Controls network sync timingminimum_sha3_target_difficulty
andminimum_randomx_target_difficulty
: Allow difficulty customization
189-196
: LGTM! Added Debug derive for better debugging support.The addition of
Debug
to the derive attributes improves debugging capabilities.
216-225
: LGTM! Improved code organization through extraction.The
handle_command
method has been refactored to delegate command execution torun_with_cli
, improving modularity.
228-243
: LGTM! Well-structured command handling.The new
run_with_cli
function provides a clean separation of command handling logic with proper error handling.p2pool/src/sharechain/in_memory.rs (3)
51-61
: LGTM! Added configurable difficulty thresholds.The new fields
minimum_randomx_target_difficulty
andminimum_sha3_target_difficulty
allow for customizable minimum difficulty thresholds.
668-671
: LGTM! Updated difficulty calculation to use configurable thresholds.The difficulty calculation now uses the configurable minimum thresholds instead of hardcoded constants.
937-947
: LGTM! Test initialization updated for new fields.The test initialization correctly sets up the new difficulty threshold fields.
integration_tests/README.md (1)
1-52
: Well-structured documentation with clear test execution strategy.The documentation effectively explains how to run the tests with various options and includes a sensible CI strategy:
- Critical tests run on every PR
- Other tests run nightly
- Long-running tests run weekly
🧰 Tools
🪛 LanguageTool
[style] ~30-~30: Three successive sentences begin with the same word. Consider rewording the sentence or use a thesaurus to find a synonym.
Context: ...est cucumber -- -t "@critical" ``` - To run a specific file or files add `-- --...(ENGLISH_WORD_REPEAT_BEGINNING_RULE)
integration_tests/tests/steps/p2pool_steps.rs (1)
1-26
: Well-structured cucumber test steps with clear responsibilities.The implementation:
- Effectively reuses steps with both #[given] and #[when] annotations
- Has clear function names that match their responsibilities
- Properly handles async operations
Force-pushed from 5fa8687 to d204dcc
Actionable comments posted: 2
♻️ Duplicate comments (3)
p2pool/src/cli/commands/util.rs (1)
181-181
:⚠️ Potential issueFix incorrect squad calculation.
The expression
min(1, config.p2p_service.num_squads)
will always return 1, making the squad calculation incorrect. This effectively ignores the num_squads configuration.Apply this diff:
```diff
- (*swarm.local_peer_id().to_bytes().last().unwrap_or(&0) as usize) % min(1, config.p2p_service.num_squads);
+ (*swarm.local_peer_id().to_bytes().last().unwrap_or(&0) as usize) % config.p2p_service.num_squads;
```
integration_tests/.husky/_/husky.sh (1)
20-20
:⚠️ Potential issueSeparate
export
andreadonly
to fix ShellCheck SC2316.-export readonly husky_skip_init=1 +readonly husky_skip_init=1 +export husky_skip_init🧰 Tools
🪛 Shellcheck (0.10.0)
[error] 20-20: This applies export to the variable named readonly, which is probably not what you want. Use a separate command or the appropriate
declare
options instead.(SC2316)
integration_tests/src/miner.rs (1)
26-65
:⚠️ Potential issueHandle errors gracefully in block mining.
Multiple unwrap calls could cause panics. Consider proper error handling.
+type Result<T> = std::result::Result<T, Box<dyn std::error::Error>>; -pub async fn mine_and_submit_tari_blocks(world: &mut TariWorld, number_of_blocks: u64, p2pool_name: String) { +pub async fn mine_and_submit_tari_blocks(world: &mut TariWorld, number_of_blocks: u64, p2pool_name: String) -> Result<()> { debug!(target: LOG_TARGET, "add {} blocks to p2pool node '{}'", number_of_blocks, p2pool_name); let mut p2pool_client = world .get_p2pool_grpc_client(&p2pool_name) .await - .unwrap_or_else(|_| panic!("Couldn't get the p2pool node grpc client '{}'", p2pool_name)); + .map_err(|e| format!("Couldn't get the p2pool node grpc client '{}': {}", p2pool_name, e))?; for i in 0..number_of_blocks { let block = p2pool_client .get_new_block(GetNewBlockRequest { pow: Some(PowAlgo { pow_algo: PowAlgos::Sha3x as i32, }), coinbase_extra: i.to_string(), wallet_payment_address: new_random_dual_tari_address().to_hex(), }) .await - .unwrap(); + .map_err(|e| format!("Failed to get new block: {}", e))?; let block_response = block.into_inner(); let target_difficulty = block_response.target_difficulty; - let block_result = block_response.block.unwrap(); - let grpc_block = block_result.block.unwrap(); - let mut block = Block::try_from(grpc_block).unwrap(); + let block_result = block_response.block.ok_or("Block response missing block")?; + let grpc_block = block_result.block.ok_or("Block result missing block")?; + let mut block = Block::try_from(grpc_block) + .map_err(|e| format!("Failed to convert gRPC block: {}", e))?; debug!(target: LOG_TARGET, "mining block '{}' with target difficulty '{}'", i, target_difficulty); - find_sha3x_header_with_achieved_difficulty(&mut block.header, Difficulty::from_u64(target_difficulty).unwrap()); + let achieved_difficulty = Difficulty::from_u64(target_difficulty) + .map_err(|e| format!("Failed to create difficulty: {}", e))?; + find_sha3x_header_with_achieved_difficulty(&mut block.header, achieved_difficulty); p2pool_client .submit_block(SubmitBlockRequest { 
- block: Some(GrpcBlock::try_from(block).unwrap()), + block: Some(GrpcBlock::try_from(block) + .map_err(|e| format!("Failed to convert block to gRPC: {}", e))?), wallet_payment_address: new_random_dual_tari_address().to_hex(), }) .await - .unwrap(); + .map_err(|e| format!("Failed to submit block: {}", e))?; } debug!(target: LOG_TARGET, "{} blocks added to p2pool node '{}'", number_of_blocks, p2pool_name); + Ok(()) }
🧹 Nitpick comments (6)
integration_tests/README.md (2)
7-8
: Reword repetitive bullet points to address style concerns.Three bullet points in a row start with “To run…” or “- To run…”. For example, lines 7, 17, and 30. Consider rephrasing to avoid repetition and improve readability.
Below is an example diff for lines 7 and 17:
```diff
- - To run a specific test, add `-- --name <Test Name>` to the command line
+ - Specify a single test by adding `-- --name <Test Name>` to the command line
- - To run tests with specific tags, e.g. `critical`, add `-- --tags @<EXPRESSION>` to the command line.
+ - Filter tests by tags, e.g. `critical`, by adding `-- --tags @<EXPRESSION>` to the command line.
```
Also applies to: 17-18, 30-31
43-52
: Add clarifications for @broken, @long-running, and nightly runs.The notes about these tags and schedule are helpful, but consider explicitly describing:
- The rationale for marking scenarios as
@broken
.- The typical duration or resource usage for
@long-running
tests.This extra detail ensures contributors understand how to categorize tests, leading to better test organization.
integration_tests/.husky/_/husky.sh (1)
7-7
: Avoid declaring and assigning in a single command to prevent masking return values.Use separate
readonly
and assignment statements, as recommended by ShellCheck (SC2155).

```diff
-readonly hook_name="$(basename "$0")"
+readonly hook_name
+hook_name="$(basename "$0")"
```
🧰 Tools
🪛 Shellcheck (0.10.0)
[warning] 7-7: Declare and assign separately to avoid masking return values.
(SC2155)
integration_tests/log4rs/cucumber.yml (2)
5-20
: Consider setting a pattern for stdout logs to include timestamps and log levels.The current pattern for stdout only includes the message
{m}
, which may make it difficult to correlate stdout logs with logs from other appenders.

```diff
     encoder:
-      pattern: "{m}"
+      pattern: "{d(%Y-%m-%d %H:%M:%S.%f)} {l:5} {m}"
```
75-79
: Consider redirecting root logger to a file instead of stdout during cucumber tests.Sending debug level logs to stdout might produce excessive output during test runs. Consider redirecting to a separate log file to maintain a cleaner console output.
```diff
 root:
   level: debug
   appenders:
-    - stdout
+    - cucumber
```
integration_tests/src/miner.rs (1)
52-53
: Improve debugging output for block mining.The current debug log only shows the target difficulty, but not the achieved difficulty. Consider adding the achieved difficulty to the log to better understand the mining process.
```diff
-        debug!(target: LOG_TARGET, "mining block '{}' with target difficulty '{}'", i, target_difficulty);
+        debug!(target: LOG_TARGET, "mining block '{}' with target difficulty '{}', attempting to find nonce", i, target_difficulty);
         find_sha3x_header_with_achieved_difficulty(&mut block.header, Difficulty::from_u64(target_difficulty).unwrap());
+        debug!(target: LOG_TARGET, "found nonce for block '{}', achieved difficulty '{}'", i, sha3x_difficulty(&block.header).unwrap());
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`Cargo.lock` (excluded by `!**/*.lock`)
📒 Files selected for processing (33)
- .github/workflows/audit.yml (1 hunks)
- cargo.toml (1 hunks)
- integration_tests/.husky/_/husky.sh (1 hunks)
- integration_tests/README.md (1 hunks)
- integration_tests/cargo.toml (1 hunks)
- integration_tests/log4rs/cucumber.yml (1 hunks)
- integration_tests/src/base_node_process.rs (1 hunks)
- integration_tests/src/lib.rs (1 hunks)
- integration_tests/src/miner.rs (1 hunks)
- integration_tests/src/p2pool_process.rs (1 hunks)
- integration_tests/src/world.rs (1 hunks)
- integration_tests/tests/cucumber.rs (1 hunks)
- integration_tests/tests/features/Sync.feature (1 hunks)
- integration_tests/tests/steps/base_node_steps.rs (1 hunks)
- integration_tests/tests/steps/mod.rs (1 hunks)
- integration_tests/tests/steps/p2pool_steps.rs (1 hunks)
- p2pool/Cargo.toml (2 hunks)
- p2pool/src/cli/args.rs (4 hunks)
- p2pool/src/cli/commands/mod.rs (1 hunks)
- p2pool/src/cli/commands/util.rs (6 hunks)
- p2pool/src/cli/mod.rs (1 hunks)
- p2pool/src/lib.rs (1 hunks)
- p2pool/src/main.rs (1 hunks)
- p2pool/src/server/config.rs (4 hunks)
- p2pool/src/server/grpc/p2pool.rs (7 hunks)
- p2pool/src/server/p2p/network.rs (32 hunks)
- p2pool/src/server/p2p/peer_store.rs (3 hunks)
- p2pool/src/server/server.rs (1 hunks)
- p2pool/src/sharechain/in_memory.rs (4 hunks)
- p2pool/src/sharechain/lmdb_block_storage.rs (1 hunks)
- p2pool/src/sharechain/p2block.rs (1 hunks)
- p2pool/src/sharechain/p2chain_level.rs (1 hunks)
- src/cli/mod.rs (0 hunks)
💤 Files with no reviewable changes (1)
- src/cli/mod.rs
🚧 Files skipped from review as they are similar to previous changes (24)
- p2pool/src/server/server.rs
- cargo.toml
- integration_tests/tests/features/Sync.feature
- p2pool/src/sharechain/p2chain_level.rs
- p2pool/src/cli/commands/mod.rs
- p2pool/src/server/grpc/p2pool.rs
- .github/workflows/audit.yml
- p2pool/src/server/p2p/peer_store.rs
- integration_tests/tests/steps/base_node_steps.rs
- p2pool/Cargo.toml
- p2pool/src/sharechain/lmdb_block_storage.rs
- integration_tests/src/base_node_process.rs
- p2pool/src/sharechain/p2block.rs
- integration_tests/tests/steps/mod.rs
- p2pool/src/server/p2p/network.rs
- integration_tests/tests/cucumber.rs
- p2pool/src/server/config.rs
- p2pool/src/cli/mod.rs
- p2pool/src/lib.rs
- p2pool/src/main.rs
- integration_tests/src/lib.rs
- integration_tests/src/world.rs
- integration_tests/cargo.toml
- p2pool/src/cli/args.rs
🧰 Additional context used
🪛 Shellcheck (0.10.0)
integration_tests/.husky/_/husky.sh
[warning] 7-7: Declare and assign separately to avoid masking return values.
(SC2155)
[warning] 17-17: ShellCheck can't follow non-constant source. Use a directive to specify location.
(SC1090)
[error] 20-20: This applies export to the variable named readonly, which is probably not what you want. Use a separate command or the appropriate declare
options instead.
(SC2316)
🪛 LanguageTool
integration_tests/README.md
[style] ~30-~30: Three successive sentences begin with the same word. Consider rewording the sentence or use a thesaurus to find a synonym.
Context: ...est cucumber -- -t "@critical" ``` - To run a specific file or files add `-- --...
(ENGLISH_WORD_REPEAT_BEGINNING_RULE)
⏰ Context from checks skipped due to timeout of 90000ms (2)
- GitHub Check: test (esmeralda)
- GitHub Check: machete
🔇 Additional comments (21)
p2pool/src/sharechain/in_memory.rs (5)
59-60
: Good addition of configurable difficulty settingsAdding these fields allows for more flexible configuration of the minimum target difficulty per algorithm, which improves the adaptability of the system compared to using hardcoded constants.
133-147
: Clean implementation of the configurable difficulty settingsThe implementation properly initializes the new difficulty fields from the config with appropriate fallbacks to the constant defaults. The
config.clone()
is necessary since you're storing the config in the struct now.
148-158
: Helpful debug logging addedThe detailed debug logging will assist with development and troubleshooting. It includes all relevant values that would help diagnose issues with difficulty settings.
667-668
: Properly updated difficulty calculation logicCorrectly switched from hardcoded constants to using the instance fields when calculating minimum difficulty, completing the encapsulation of these configuration values.
943-944
: Tests appropriately updatedThe test implementation has been updated to initialize the new fields, ensuring that tests remain valid after these changes.
p2pool/src/cli/commands/util.rs (8)
4-16
: Enhanced imports for file I/O and timing operations.The added imports from the standard library support new file handling and timing functionalities that are used by the LibP2pInfo struct.
18-20
: Expanded imports with necessary logging and serialization support.Added imports for PeerId, warn logging, and serialization/deserialization capabilities to support the new functionality.
55-59
: Configuration based on environment for detailed logging.Good enhancement to allow for detailed logging when needed through an environment variable, which helps with debugging during integration tests.
65-65
: Added CLI logging for better debugging.The addition of logging the CLI configuration provides valuable debugging information.
89-100
: Added support for additional configuration parameters.The code now handles new command-line arguments for network silence delay and minimum target difficulties, expanding the configurability of the system.
162-166
: Enhanced logging for consensus manager initialization.Improved logging with network and genesis hash information provides better visibility into the system's state during initialization.
185-193
: Added LibP2pInfo export functionality.The code now exports peer ID and squad information to a file when requested, which is useful for integration testing and monitoring. Good error handling with appropriate warning logs.
224-259
: Added LibP2pInfo struct with serialization/deserialization capabilities.This new struct and its methods provide a clean interface for persisting and retrieving peer information. The implementation includes:
- Proper directory creation if needed
- Serialization to JSON
- A timeout mechanism when reading files that may not exist yet
- Comprehensive error handling
The read_from_file method's polling approach with timeout is particularly useful for integration testing when waiting for processes to initialize.
integration_tests/README.md (1)
1-2
: File introduction looks good.No issues found in these lines; they effectively introduce the README for integration tests.
integration_tests/tests/steps/p2pool_steps.rs (1)
1-32
: These step definitions look well-structured.The async steps effectively delegate to the underlying process-spawning and block-adding utilities. No issues found.
integration_tests/src/p2pool_process.rs (2)
269-281
: Add a brief sleep to prevent high CPU usage.Repeatedly attempting to bind ports without any delay can create a busy loop. Insert a minor sleep statement in the loop to avoid excessive CPU usage.
```diff
 for _ in 0..20 {
+    std::thread::sleep(std::time::Duration::from_millis(100));
     if !p2p_port_closed && TcpListener::bind(...) { ... }
     if !grpc_port_closed && TcpListener::bind(...) { ... }
     ...
 }
```
77-78
:❓ Verification inconclusive
Verify concurrency implications of setting TARI_NETWORK globally.
Setting
std::env::set_var("TARI_NETWORK", "localnet")
may cause interference if multiple tests run in parallel with different networks.Consider verifying or documenting that only one network is needed at a time. Example approach:
🏁 Script executed:
```shell
#!/bin/bash
# Attempt to find concurrency usage of TARI_NETWORK env var
rg "TARI_NETWORK" -A 5
```
Length of output: 2019
Action: Verify Concurrency Risks for Global TARI_NETWORK Setting
The code in
integration_tests/src/p2pool_process.rs
(lines 77-78) sets theTARI_NETWORK
environment variable globally viastd::env::set_var("TARI_NETWORK", "localnet")
and then validates it withset_network_if_choice_valid(Network::LocalNet).unwrap()
. Our search shows that this variable is also set in other integration tests (e.g., inintegration_tests/tests/cucumber.rs
), which confirms that it’s being used across multiple tests.
- Concurrency Concern: Since the environment variable is defined globally, there is a risk of interference when tests run in parallel under different network configurations.
- Recommendation: Verify that tests are executed in isolation regarding network configuration, or document explicitly that only one network (i.e., "localnet") is supported at any given time during test runs. If concurrent execution with different networks is expected, consider scoping environment settings to each test instance to prevent conflicts.
integration_tests/log4rs/cucumber.yml (3)
2-2
: Good refresh rate configuration.The 30-second refresh rate provides a good balance between timely log updates and system overhead.
69-74
: Clear intention with commented code.The commented-out section provides useful context about the intention to suppress standard output during cucumber tests.
109-118
: Appropriate error-level logging configuration for third-party libraries.Setting the logging level to error for third-party libraries like globset and hickory_proto helps reduce noise in the logs while still capturing important errors.
integration_tests/src/miner.rs (1)
67-77
: Function looks good.The random address generation function is well-implemented and provides a clear interface.
Force-pushed from d204dcc to 20abe0c
Actionable comments posted: 1
♻️ Duplicate comments (5)
p2pool/src/cli/commands/util.rs (1)
181-181
:⚠️ Potential issueFix incorrect squad calculation.
The expression
min(1, config.p2p_service.num_squads)
will always return 1, making the squad calculation incorrect. This effectively ignores the num_squads configuration.Apply this diff:
```diff
- (*swarm.local_peer_id().to_bytes().last().unwrap_or(&0) as usize) % min(1, config.p2p_service.num_squads);
+ (*swarm.local_peer_id().to_bytes().last().unwrap_or(&0) as usize) % config.p2p_service.num_squads;
```
integration_tests/.husky/_/husky.sh (1)
20-20
:⚠️ Potential issueCorrect the export of a read-only variable.
The current line incorrectly combines
export
withreadonly
and results in ShellCheck error SC2316. Instead, separate the two operations.- export readonly husky_skip_init=1 + readonly husky_skip_init=1 + export husky_skip_init🧰 Tools
🪛 Shellcheck (0.10.0)
[error] 20-20: This applies export to the variable named readonly, which is probably not what you want. Use a separate command or the appropriate
declare
options instead.(SC2316)
integration_tests/tests/cucumber.rs (3)
3-4
: 🛠️ Refactor suggestionDocument the Usage of
internal_output_capture
.The code uses an unstable Rust feature without explanation. Add a comment explaining why this feature is necessary and any future migration plans.
+// This unstable feature is required for capturing test output in integration tests. +// It's an internal implementation detail of the Rust standard library used for test infrastructure. +// TODO: Monitor for stabilization or alternative approaches in future Rust versions. #![feature(internal_output_capture)]
25-46
: 🛠️ Refactor suggestion — Consolidate output management functions.
There's duplicated functionality between `flush_stdout` and `_flush_stdout`, with multiple `unwrap` calls and no proper error handling. Note that the buffer must be read and cleared through the same lock guard; acquiring the `std::sync::Mutex` a second time on the same thread would deadlock.

```diff
-fn flush_stdout(buffer: &Arc<Mutex<Vec<u8>>>) {
+fn flush_stdout(buffer: &Arc<Mutex<Vec<u8>>>) -> io::Result<()> {
+    let mut buffer_guard = buffer.lock().map_err(|e| {
+        io::Error::new(io::ErrorKind::Other, format!("Failed to lock buffer: {}", e))
+    })?;
+    let content = str::from_utf8(&buffer_guard)
+        .map_err(|e| io::Error::new(io::ErrorKind::InvalidData, e))?;
     info!(
         target: LOG_TARGET_STDOUT,
-        "{}",
-        str::from_utf8(&buffer.lock().unwrap()).unwrap()
+        "{}", content
     );
-    buffer.lock().unwrap().clear();
-}
-
-fn _flush_stdout(buffer: &Arc<Mutex<Vec<u8>>>) {
-    // After each test we flush the stdout to the logs.
-    let mut buffer = buffer.lock().unwrap();
-    io::stdout().write_all(&buffer).unwrap();
-    // io::stdout().flush().unwrap();
-    info!(
-        target: LOG_TARGET_STDOUT,
-        "{}",
-        str::from_utf8(&buffer).unwrap()
-    );
-    buffer.clear();
+    buffer_guard.clear();
+    Ok(())
 }
```
106-112
: 🛠️ Refactor suggestion — Improve logging configuration error handling and flexibility.
The logging configuration uses hard-coded paths with minimal error handling for file operations.

```diff
+const DEFAULT_LOG_CONFIG_PATH: &str = "log4rs/cucumber.yml";
+const DEFAULT_LOG_CONFIG_DIR: &str = "./";
+
+fn setup_logging() -> Result<(), Box<dyn std::error::Error>> {
+    let config_path = std::env::var("LOG_CONFIG_PATH")
+        .unwrap_or_else(|_| DEFAULT_LOG_CONFIG_PATH.to_string());
+    let config_dir = std::env::var("LOG_CONFIG_DIR")
+        .unwrap_or_else(|_| DEFAULT_LOG_CONFIG_DIR.to_string());
+
+    initialize_logging(
+        &PathBuf::from(config_path),
+        &PathBuf::from(config_dir),
+        include_str!("../log4rs/cucumber.yml"),
+    )?;
+    Ok(())
+}
+
-initialize_logging(
-    &PathBuf::from("log4rs/cucumber.yml"),
-    &PathBuf::from("./"),
-    include_str!("../log4rs/cucumber.yml"),
-)
-.expect("logging not configured");
+setup_logging().expect("Failed to configure logging");
```
🧹 Nitpick comments (18)
integration_tests/tests/steps/mod.rs (1)
12-14
: Clarify constant naming and units for consistent understanding.
The constant names and values could be clearer:

- `HALF_SECOND` is defined as 500, which is in milliseconds, not seconds
- `TWO_MINUTES_WITH_HALF_SECOND_SLEEP` is unclear about its purpose

Consider this improvement:

```diff
-pub const CONFIRMATION_PERIOD: u64 = 4;
-pub const TWO_MINUTES_WITH_HALF_SECOND_SLEEP: u64 = 240;
-pub const HALF_SECOND: u64 = 500;
+pub const CONFIRMATION_PERIOD_SECONDS: u64 = 4;
+pub const TWO_MINUTES_ITERATIONS: u64 = 240; // Number of half-second iterations to make 2 minutes
+pub const HALF_SECOND_MS: u64 = 500; // 500 milliseconds
```

p2pool/src/cli/commands/util.rs (4)
243-258
: Improve the file waiting mechanism to reduce CPU usage.
The current implementation uses a busy-wait loop with a fixed 100 ms sleep, which could be inefficient for longer waits. Consider implementing an exponential backoff strategy, or at least logging the wait status.

```diff
 pub fn read_from_file(path: &Path, timeout: Duration) -> io::Result<Self> {
     let start = Instant::now();
+    let mut wait_time = Duration::from_millis(100);
+    let max_wait = Duration::from_secs(1);
     while !path.exists() {
         if start.elapsed() >= timeout {
             return Err(io::Error::new(
                 io::ErrorKind::TimedOut,
                 format!("Timeout waiting for '{}' to be created", path.display()),
             ));
         }
-        sleep(Duration::from_millis(100));
+        sleep(wait_time);
+        // Exponential backoff with a cap
+        wait_time = min(wait_time * 2, max_wait);
+        if start.elapsed() > Duration::from_secs(3) {
+            warn!(target: LOG_TARGET, "Still waiting for '{}' to be created...", path.display());
+        }
     }
```
224-228
: Add documentation for the `LibP2pInfo` struct.
The new `LibP2pInfo` struct lacks documentation. Adding doc comments would help developers understand its purpose and usage.

```diff
+/// Information about a libp2p node that can be serialized and shared.
+/// Contains the peer ID and squad assignment.
 #[derive(Serialize, Deserialize)]
 pub struct LibP2pInfo {
+    /// The peer's unique identifier
     pub peer_id: PeerId,
+    /// The squad this peer belongs to
     pub squad: String,
 }
```
255-257
: Improve error handling for JSON deserialization.
The current implementation doesn't provide context when JSON parsing fails. Consider adding more specific error information.

```diff
 let file = File::open(path)?;
 let reader = BufReader::new(file);
-let libp2p_info = serde_json::from_reader(reader)?;
+let libp2p_info = serde_json::from_reader(reader)
+    .map_err(|e| io::Error::new(
+        io::ErrorKind::InvalidData,
+        format!("Failed to deserialize LibP2pInfo: {}", e)
+    ))?;
 Ok(libp2p_info)
```
185-193
: Consider adding more context to the failure message.
When saving the libp2p info fails, it would be helpful to include the path in the error message for easier debugging.

```diff
 if let Some(path) = args.export_libp2p_info.clone() {
     let libp2p_info = LibP2pInfo {
         peer_id: *swarm.local_peer_id(),
         squad: squad.clone(),
     };
     if let Err(err) = libp2p_info.save_to_file(&path) {
-        warn!(target: LOG_TARGET, "Failed to save libp2p info to file: '{}'", err);
+        warn!(target: LOG_TARGET, "Failed to save libp2p info to file '{}': {}", path.display(), err);
     }
 }
```

integration_tests/README.md (3)
5-5
: Add a comma after "form" for better readability.

```diff
-In its simplest form you can run the tests from the project route with `cargo test --release --test cucumber`
+In its simplest form, you can run the tests from the project route with `cargo test --release --test cucumber`
```
🪛 LanguageTool
[uncategorized] ~5-~5: Possible missing comma found.
Context: ...s ## Procedure to run In its simplest form you can run the tests from the project ...(AI_HYDRA_LEO_MISSING_COMMA)
11-14
: Replace placeholder values with actual examples.
The documentation currently uses placeholders ("?????????") instead of real examples, which reduces its usefulness. Consider replacing these with actual test names, tag expressions, and file paths to make the documentation more practical.

```diff
- cargo test --release --test cucumber -- --name "?????????"
+ cargo test --release --test cucumber -- --name "Sync two p2pool nodes"
- cargo test --release --test cucumber -- -n "?????????"
+ cargo test --release --test cucumber -- -n "Sync two p2pool nodes"
- cargo test --release --test cucumber -- --input "?????????"
+ cargo test --release --test cucumber -- --input "tests/features/Sync.feature"
- cargo test --release --test cucumber -- -i "?????????"
+ cargo test --release --test cucumber -- -i "tests/features/Sync.feature"
```

Also applies to: 21-27, 34-40
30-30
: Add a comma after "files" for better readability.

```diff
-To run a specific file or files add `-- --input glob` to the command line
+To run a specific file or files, add `-- --input glob` to the command line
```

🧰 Tools
🪛 LanguageTool
[style] ~30-~30: Three successive sentences begin with the same word. Consider rewording the sentence or use a thesaurus to find a synonym.
Context: ...est cucumber -- -t "@critical" ``` - To run a specific file or files add `-- --...(ENGLISH_WORD_REPEAT_BEGINNING_RULE)
[uncategorized] ~30-~30: Possible missing comma found.
Context: ...cal" ``` - To run a specific file or files add-- --input glob
to the command li...(AI_HYDRA_LEO_MISSING_COMMA)
integration_tests/.husky/_/husky.sh (1)
7-7
: Declare and assign separately to avoid masking return values.
The current declaration might mask potential return values (ShellCheck SC2155). Note that the assignment must come before `readonly`; declaring the variable read-only first would make the subsequent assignment fail.

```diff
- readonly hook_name="$(basename "$0")"
+ hook_name="$(basename "$0")"
+ readonly hook_name
```

🧰 Tools
🪛 Shellcheck (0.10.0)
[warning] 7-7: Declare and assign separately to avoid masking return values.
(SC2155)
integration_tests/tests/cucumber.rs (2)
155-162
: Consider making test configurations configurable via environment variables.
The test output file path is hardcoded, and there are commented-out settings for test behavior (`fail_on_skipped`, `fail_fast`). Consider making these configurable through environment variables for more flexible test execution.

```diff
-    let file = fs::File::create("cucumber-output-junit.xml").unwrap();
+    let output_path = std::env::var("CUCUMBER_OUTPUT_PATH")
+        .unwrap_or_else(|_| "cucumber-output-junit.xml".to_string());
+    let file = fs::File::create(&output_path)
+        .expect(&format!("Failed to create output file: {}", output_path));
+
+    let fail_on_skipped = std::env::var("CUCUMBER_FAIL_ON_SKIPPED")
+        .map(|v| v == "1" || v.to_lowercase() == "true")
+        .unwrap_or(false);
+
+    let mut world_builder = world;
+    if fail_on_skipped {
+        world_builder = world_builder.fail_on_skipped();
+    }
+
-    world
+    world_builder
         // .fail_on_skipped()
         // .fail_fast() - Not yet supported in 0.18
```
166-166
: Handle potential errors in the final output flush.
The final call to `flush_stdout` doesn't handle potential errors. Consider adding error handling for a more robust implementation.

```diff
- flush_stdout(&stdout_buffer);
+ if let Err(err) = flush_stdout(&stdout_buffer) {
+     eprintln!("Error flushing stdout buffer: {}", err);
+ }
```

integration_tests/tests/steps/p2pool_steps.rs (4)
13-17
: Add documentation comments for the public function.
The function lacks documentation comments explaining its purpose and parameters. Add Rust doc comments to improve code maintainability.

```diff
+/// Starts a p2pool seed node in the specified squad and connects it to the specified base node.
+///
+/// # Parameters
+/// * `world` - The test world context
+/// * `p2pool_name` - The name to identify this p2pool node
+/// * `squad` - The squad name this node belongs to
+/// * `base_node_name` - The name of the base node to connect to
 #[given(expr = "I have a p2pool seed node {word} in squad {word} connected to base node {word}")]
 #[when(expr = "I have a p2pool seed node {word} in squad {word} connected to base node {word}")]
 async fn start_p2pool_seed_node(world: &mut TariWorld, p2pool_name: String, squad: String, base_node_name: String) {
     spawn_p2pool_node(world, true, p2pool_name, squad, base_node_name).await;
 }
```
19-23
: Add documentation comments for the public function.
The function lacks documentation comments explaining its purpose and parameters. Add Rust doc comments to improve code maintainability.

```diff
+/// Starts a regular p2pool node (non-seed) in the specified squad and connects it to the specified base node.
+///
+/// # Parameters
+/// * `world` - The test world context
+/// * `p2pool_name` - The name to identify this p2pool node
+/// * `squad` - The squad name this node belongs to
+/// * `base_node_name` - The name of the base node to connect to
 #[given(expr = "I have a p2pool node {word} in squad {word} connected to base node {word}")]
 #[when(expr = "I have a p2pool node {word} in squad {word} connected to base node {word}")]
 async fn start_p2pool_node(world: &mut TariWorld, p2pool_name: String, squad: String, base_node_name: String) {
     spawn_p2pool_node(world, false, p2pool_name, squad, base_node_name).await;
 }
```
25-29
: Add documentation and a semicolon to the function call.
The function lacks documentation comments and is missing a semicolon at the end of the function call. While Rust allows omitting the semicolon for the final expression in a function, adding it is more consistent with typical Rust style.

```diff
+/// Adds the specified number of blocks to the p2pool node by mining them.
+///
+/// # Parameters
+/// * `world` - The test world context
+/// * `number_of_blocks` - The number of blocks to add
+/// * `p2pool_name` - The name of the p2pool node to add blocks to
 #[given(expr = "I add {int} blocks to p2pool node {word}")]
 #[when(expr = "I add {int} blocks to p2pool node {word}")]
 async fn add_blocks_to_p2pool_node(world: &mut TariWorld, number_of_blocks: u64, p2pool_name: String) {
-    mine_and_submit_tari_blocks(world, number_of_blocks, p2pool_name).await
+    mine_and_submit_tari_blocks(world, number_of_blocks, p2pool_name).await;
 }
```
31-35
: Add documentation and a semicolon to the function call.
The function lacks documentation comments and is missing a semicolon at the end of the function call.

```diff
+/// Verifies that the specified p2pool node is at the expected block height.
+///
+/// # Parameters
+/// * `world` - The test world context
+/// * `p2pool_name` - The name of the p2pool node to verify
+/// * `height` - The expected block height
 #[then(expr = "p2pool node {} is at height {int}")]
 #[when(expr = "p2pool node {} is at height {int}")]
 async fn verify_p2pool_block_height(world: &mut TariWorld, p2pool_name: String, height: u64) {
-    verify_block_height(world, p2pool_name, height).await
+    verify_block_height(world, p2pool_name, height).await;
 }
```

integration_tests/src/p2pool_process.rs (3)
77-135
: Consider returning a Result instead of panicking.
Currently, directory creation failures triggerpanic!
. While acceptable for certain test code, transitioning to aResult
-based approach can enable more stable error propagation and reduce abrupt crashes in complex CI environments.
210-221
: Improve error feedback when spawning the node process.
If the executable is missing or lacks execution permissions, the current panic might obscure details for debugging. Consider logging the presence of the binary before attempting to spawn, or returning a descriptive error.
284-410
: Use a builder pattern for constructing command-line arguments.
The function iterates over many optional fields, resulting in a long, linear approach. Refactoring into a builder API could improve readability and maintainability, especially as more flags are added.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
Cargo.lock
is excluded by `!**/*.lock`
📒 Files selected for processing (33)
- .github/workflows/audit.yml (1 hunks)
- cargo.toml (1 hunks)
- integration_tests/.husky/_/husky.sh (1 hunks)
- integration_tests/README.md (1 hunks)
- integration_tests/cargo.toml (1 hunks)
- integration_tests/log4rs/cucumber.yml (1 hunks)
- integration_tests/src/base_node_process.rs (1 hunks)
- integration_tests/src/lib.rs (1 hunks)
- integration_tests/src/miner.rs (1 hunks)
- integration_tests/src/p2pool_process.rs (1 hunks)
- integration_tests/src/world.rs (1 hunks)
- integration_tests/tests/cucumber.rs (1 hunks)
- integration_tests/tests/features/Sync.feature (1 hunks)
- integration_tests/tests/steps/base_node_steps.rs (1 hunks)
- integration_tests/tests/steps/mod.rs (1 hunks)
- integration_tests/tests/steps/p2pool_steps.rs (1 hunks)
- p2pool/Cargo.toml (2 hunks)
- p2pool/src/cli/args.rs (4 hunks)
- p2pool/src/cli/commands/mod.rs (1 hunks)
- p2pool/src/cli/commands/util.rs (6 hunks)
- p2pool/src/cli/mod.rs (1 hunks)
- p2pool/src/lib.rs (1 hunks)
- p2pool/src/main.rs (1 hunks)
- p2pool/src/server/config.rs (4 hunks)
- p2pool/src/server/grpc/p2pool.rs (7 hunks)
- p2pool/src/server/p2p/network.rs (32 hunks)
- p2pool/src/server/p2p/peer_store.rs (3 hunks)
- p2pool/src/server/server.rs (1 hunks)
- p2pool/src/sharechain/in_memory.rs (4 hunks)
- p2pool/src/sharechain/lmdb_block_storage.rs (1 hunks)
- p2pool/src/sharechain/p2block.rs (1 hunks)
- p2pool/src/sharechain/p2chain_level.rs (1 hunks)
- src/cli/mod.rs (0 hunks)
💤 Files with no reviewable changes (1)
- src/cli/mod.rs
🚧 Files skipped from review as they are similar to previous changes (23)
- cargo.toml
- p2pool/src/server/grpc/p2pool.rs
- .github/workflows/audit.yml
- integration_tests/tests/features/Sync.feature
- p2pool/src/cli/commands/mod.rs
- p2pool/src/sharechain/p2chain_level.rs
- p2pool/src/server/p2p/peer_store.rs
- p2pool/src/server/server.rs
- p2pool/src/cli/mod.rs
- p2pool/src/server/config.rs
- integration_tests/src/base_node_process.rs
- p2pool/src/sharechain/p2block.rs
- integration_tests/src/lib.rs
- p2pool/src/sharechain/lmdb_block_storage.rs
- p2pool/Cargo.toml
- integration_tests/src/world.rs
- integration_tests/tests/steps/base_node_steps.rs
- p2pool/src/sharechain/in_memory.rs
- p2pool/src/lib.rs
- integration_tests/cargo.toml
- p2pool/src/cli/args.rs
- integration_tests/log4rs/cucumber.yml
- p2pool/src/server/p2p/network.rs
🧰 Additional context used
🪛 Shellcheck (0.10.0)
integration_tests/.husky/_/husky.sh
[warning] 7-7: Declare and assign separately to avoid masking return values.
(SC2155)
[warning] 17-17: ShellCheck can't follow non-constant source. Use a directive to specify location.
(SC1090)
[error] 20-20: This applies export to the variable named readonly, which is probably not what you want. Use a separate command or the appropriate declare
options instead.
(SC2316)
🪛 LanguageTool
integration_tests/README.md
[uncategorized] ~5-~5: Possible missing comma found.
Context: ...s ## Procedure to run In its simplest form you can run the tests from the project ...
(AI_HYDRA_LEO_MISSING_COMMA)
[style] ~30-~30: Three successive sentences begin with the same word. Consider rewording the sentence or use a thesaurus to find a synonym.
Context: ...est cucumber -- -t "@critical" ``` - To run a specific file or files add `-- --...
(ENGLISH_WORD_REPEAT_BEGINNING_RULE)
[uncategorized] ~30-~30: Possible missing comma found.
Context: ...cal" ``` - To run a specific file or files add -- --input glob
to the command li...
(AI_HYDRA_LEO_MISSING_COMMA)
⏰ Context from checks skipped due to timeout of 90000ms (2)
- GitHub Check: test (esmeralda)
- GitHub Check: Cucumber tests / Base Layer
🔇 Additional comments (10)
integration_tests/tests/steps/mod.rs (3)
21-33
: Clean up commented code and improve error handling.
The function has several issues:

- Contains commented-out code that should be removed.
- Uses `back()` to check the last error but doesn't clear it, which could affect subsequent tests.
- Comments suggest an incomplete implementation.

Consider this implementation:

```diff
 #[then(regex = r"I receive an error containing '(.*)'")]
 async fn receive_an_error(world: &mut TariWorld, error: String) {
-    match world.errors.back() {
+    match world.errors.pop_back() {
-        Some(err) => assert_eq!(err, &error),
+        Some(err) => assert_eq!(err, error),
         None => panic!("Should have received an error"),
     };
-
-    // No-op.
-    // Was not implemented in previous suite, gave it a quick try but missing other peices
-
-    // assert!(world.errors.len() > 1);
-    // assert!(world.errors.pop_front().unwrap().contains(&error))
 }
```
16-19
: LGTM: Well-implemented wait function.
The `wait_seconds` function is correctly implemented with tokio's sleep functionality.
6-8
: LGTM: Appropriate imports for Cucumber testing.
The imports are correctly set up for the Cucumber testing framework and the custom `TariWorld` context.
p2pool/src/main.rs (1)
13-13
: Module reorganization: CLI functionality moved to shared library.
The import has been changed from the local crate to the `sha_p2pool` crate, which is part of a broader restructuring to support the new Cucumber tests. This is a good architectural change that makes the CLI functionality more accessible for integration testing.

As noted in a previous review comment, several files still contain references to `crate::cli`. Please ensure that all references to the CLI module are consistently updated across the codebase to avoid compilation errors and maintain a coherent structure.

integration_tests/tests/cucumber.rs (1)
113-163
: Clean up test configuration and improve error handling.
The test configuration contains commented-out code, hard-coded values, and minimal error handling.

```diff
+/// Maximum number of concurrent test scenarios
+const DEFAULT_MAX_CONCURRENT_SCENARIOS: usize = 5;
+
 #[cfg(test)]
 std::io::set_output_capture(Some(stdout_buffer.clone()));
-// Never move this line below the runtime creation!!! It will cause that any new thread created via task::spawn will
-// not be affected by the output capture.
-// let stdout_buffer_clone = stdout_buffer.clone();
+
+// IMPORTANT: Never move the output capture setup below the runtime creation
+// as it will prevent capture in task::spawn threads
+
 let runtime = Runtime::new().unwrap();
 runtime.block_on(async {
+    let max_concurrent = std::env::var("MAX_CONCURRENT_SCENARIOS")
+        .ok()
+        .and_then(|v| v.parse().ok())
+        .unwrap_or(DEFAULT_MAX_CONCURRENT_SCENARIOS);
+
     let world = TariWorld::cucumber()
         .repeat_failed()
-        // following config needed to use eprint statements in the tests
-        .max_concurrent_scenarios(5)
+        .max_concurrent_scenarios(max_concurrent)
         .after(move |_feature, _rule, scenario, ev, maybe_world| {
-            // let stdout_buffer = stdout_buffer_clone.clone();
             Box::pin(async move {
-                // flush_stdout(&stdout_buffer);
                 match ev {
                     ScenarioFinished::StepFailed(_capture_locations, _location, _error) => {
                         error!(target: LOG_TARGET, "Scenario failed");
                     },
                     // ... rest of the match arms ...
                 }
                 if let Some(maybe_world) = maybe_world {
                     maybe_world.after(scenario).await;
                 }
             })
         })
         // ... rest of the configuration ...
-    let file = fs::File::create("cucumber-output-junit.xml").unwrap();
+    let output_path = std::env::var("CUCUMBER_OUTPUT_PATH")
+        .unwrap_or_else(|_| "cucumber-output-junit.xml".to_string());
+    let file = fs::File::create(&output_path)
+        .map_err(|e| format!("Failed to create output file {}: {}", output_path, e))?;
```

integration_tests/src/p2pool_process.rs (2)
44-48
: Drop trait usage looks appropriate.
Killing the process inside theDrop
trait helps ensure a clean teardown. However, be mindful that panic-based error handling can make debugging difficult if failures occur in the kill process.
253-281
: Add a short delay in the port-free check loop.
This loop could spin CPU resources unnecessarily. Adding a minor sleep (e.g., 50-100 ms) while waiting for ports to become free would reduce system load.

```diff
 for _ in 0..20 {
+    std::thread::sleep(std::time::Duration::from_millis(100));
     if !p2p_port_closed && ...
 }
```
integration_tests/src/miner.rs (3)
27-71
: Avoid unchecked unwraps in the block mining loop.
Multiple `unwrap()` calls here can cause the test to panic unexpectedly on transient errors. Consider handling them gracefully and returning errors to facilitate robust test reporting.
85-99
: Refactor to return errors rather than panicking on nonce exhaustion.
Panic on a high iteration count can result in an abrupt test termination. A `Result` would provide more test control, allowing callers to handle failure.
101-135
: Gracefully handle HTTP request failures in `verify_block_height`.
Similar to the mining loop, `unwrap()` calls can cause early panics in the test. A robust error-handling mechanism (e.g., `match` or `Result`) would provide safer test execution and clearer error messages.
20abe0c to d378e81 Compare
Actionable comments posted: 3
♻️ Duplicate comments (5)
integration_tests/.husky/_/husky.sh (1)
20-20
: ⚠️ Potential issue — Correct the export of a read-only variable.
The current line incorrectly combines `export` with `readonly`, which triggers ShellCheck error SC2316. Separate the two operations:

```diff
- export readonly husky_skip_init=1
+ readonly husky_skip_init=1
+ export husky_skip_init
```

🧰 Tools
🪛 Shellcheck (0.10.0)
[error] 20-20: This applies export to the variable named `readonly`, which is probably not what you want. Use a separate command or the appropriate `declare` options instead. (SC2316)
integration_tests/tests/steps/base_node_steps.rs (1)
23-23
: ⚠️ Potential issue — Fix incorrect debug message.
The debug message incorrectly states "start_seed_base_node" instead of "start_base_node".

```diff
- debug!(target: LOG_TARGET, "start_seed_base_node: spawned '{}'", name);
+ debug!(target: LOG_TARGET, "start_base_node: spawned '{}'", name);
```

integration_tests/src/miner.rs (3)
27-71
: ⚠️ Potential issue — Handle errors gracefully in block mining.
Multiple unwrap calls could cause panics. Consider proper error handling by returning a `Result` type and replacing unwrap calls with error propagation.

```diff
-pub async fn mine_and_submit_tari_blocks(world: &mut TariWorld, number_of_blocks: u64, p2pool_name: String) {
+pub async fn mine_and_submit_tari_blocks(world: &mut TariWorld, number_of_blocks: u64, p2pool_name: String) -> Result<(), Box<dyn std::error::Error>> {
     debug!(target: LOG_TARGET, "add {} blocks to p2pool node '{}'", number_of_blocks, p2pool_name);
     let mut p2pool_client = world
         .get_p2pool_grpc_client(&p2pool_name)
         .await
-        .unwrap_or_else(|_| panic!("Couldn't get the p2pool node grpc client '{}'", p2pool_name));
+        .map_err(|e| format!("Couldn't get the p2pool node grpc client '{}': {}", p2pool_name, e))?;
     for i in 0..number_of_blocks {
         let wallet_payment_address = new_random_dual_tari_address().to_hex();
         let block = p2pool_client
             .get_new_block(GetNewBlockRequest {
                 pow: Some(PowAlgo {
                     pow_algo: PowAlgos::Sha3x as i32,
                 }),
                 coinbase_extra: i.to_string(),
                 wallet_payment_address: wallet_payment_address.clone(),
             })
             .await
-            .unwrap();
+            .map_err(|e| format!("Failed to get new block: {}", e))?;
         let block_response = block.into_inner();
         let target_difficulty = block_response.target_difficulty;
-        let block_result = block_response.block.unwrap();
-        let grpc_block = block_result.block.unwrap();
-        let mut block = Block::try_from(grpc_block).unwrap();
+        let block_result = block_response.block.ok_or("Block response missing block")?;
+        let grpc_block = block_result.block.ok_or("Block result missing block")?;
+        let mut block = Block::try_from(grpc_block).map_err(|e| format!("Failed to convert gRPC block: {}", e))?;
         debug!(
             target: LOG_TARGET,
             "mining block '{}' (height '{}') with target difficulty '{}' and wallet_payment_address '{}'",
             i, block.header.height, target_difficulty, wallet_payment_address
         );
-        find_sha3x_header_with_achieved_difficulty(&mut block.header, Difficulty::from_u64(target_difficulty).unwrap());
+        let difficulty = Difficulty::from_u64(target_difficulty).map_err(|e| format!("Invalid target difficulty: {}", e))?;
+        find_sha3x_header_with_achieved_difficulty(&mut block.header, difficulty)?;
         p2pool_client
             .submit_block(SubmitBlockRequest {
-                block: Some(GrpcBlock::try_from(block).unwrap()),
+                block: Some(GrpcBlock::try_from(block).map_err(|e| format!("Failed to convert block to gRPC: {}", e))?),
                 wallet_payment_address: new_random_dual_tari_address().to_hex(),
             })
             .await
-            .unwrap();
+            .map_err(|e| format!("Failed to submit block: {}", e))?;
     }
     debug!(target: LOG_TARGET, "{} blocks added to p2pool node '{}'", number_of_blocks, p2pool_name);
+    Ok(())
 }
```
85-99
: 🛠️ Refactor suggestion — Improve error handling in `find_sha3x_header_with_achieved_difficulty`.
The function uses unwrap on the result of `sha3x_difficulty` and panics after 100 million tries. Consider returning a `Result` type for better error handling.

```diff
-pub fn find_sha3x_header_with_achieved_difficulty(header: &mut BlockHeader, achieved_difficulty: Difficulty) {
+pub fn find_sha3x_header_with_achieved_difficulty(header: &mut BlockHeader, achieved_difficulty: Difficulty) -> Result<(), String> {
     let mut num_tries = 0;
-    while sha3x_difficulty(header).unwrap() != achieved_difficulty {
+    while sha3x_difficulty(header).map_err(|e| format!("Failed to calculate difficulty: {}", e))? != achieved_difficulty {
         header.nonce += 1;
         num_tries += 1;
         if num_tries > 100_000_000 {
             // Just in case we burn a hole in the CI server
-            panic!(
-                "Could not find a nonce for achieved difficulty {} in time",
-                achieved_difficulty
-            );
+            return Err(format!(
+                "Could not find a nonce for achieved difficulty {} in time",
+                achieved_difficulty
+            ));
         }
     }
+    Ok(())
 }
```
101-135
: 🛠️ Refactor suggestion — Handle errors gracefully in `verify_block_height`.
Similar to the other functions, this function uses unwrap in multiple places and panics if the height isn't reached, which could lead to unhandled crashes.

```diff
-pub async fn verify_block_height(world: &mut TariWorld, p2pool_name: String, height: u64) {
+pub async fn verify_block_height(world: &mut TariWorld, p2pool_name: String, height: u64) -> Result<(), Box<dyn std::error::Error>> {
     debug!(target: LOG_TARGET, "verify '{}' is at height {}", p2pool_name, height);
-    let p2pool_process = world.get_p2pool_node(&p2pool_name).unwrap();
+    let p2pool_process = world.get_p2pool_node(&p2pool_name)
+        .ok_or_else(|| format!("Could not find p2pool node '{}'", p2pool_name))?;
     if !p2pool_process.config.http_server.enabled {
-        panic!("p2pool node '{}' doesn't have the http server enabled", p2pool_name);
+        return Err(format!("p2pool node '{}' doesn't have the http server enabled", p2pool_name).into());
     }
     let stats_url = format!("http://127.0.0.1:{}/stats", p2pool_process.config.http_server.port);
     let p2pool_client = Client::new();
     let mut local_height = 0;
     let mut stats: Value = Value::Null;
     for i in 0..(THIRTY_SECONDS_WITH_HALF_SECOND_SLEEP) {
-        let response = p2pool_client.get(stats_url.clone()).send().await.unwrap();
+        let response = p2pool_client.get(stats_url.clone()).send().await
+            .map_err(|e| format!("Failed to send request to {}: {}", stats_url, e))?;
         if response.status().is_success() {
-            stats = response.json().await.unwrap();
-            local_height = stats["sha3x_stats"]["height"].as_u64().unwrap() + 1;
+            stats = response.json().await
+                .map_err(|e| format!("Failed to parse JSON response: {}", e))?;
+            local_height = stats["sha3x_stats"]["height"].as_u64()
+                .ok_or_else(|| format!("Could not extract height from stats: {:?}", stats))? + 1;
             debug!(target: LOG_TARGET, "{}: '{}' is at height {}, need to be at {}", i, p2pool_name, local_height, height);
             if local_height >= height {
                 debug!(target: LOG_TARGET, "Height achieved. Stats: {:?}", stats);
-                return;
+                return Ok(());
             }
         } else {
-            panic!("Failed to query {} for stats: {}", stats_url, response.status());
+            return Err(format!("Failed to query {} for stats: {}", stats_url, response.status()).into());
         }
         tokio::time::sleep(Duration::from_millis(HALF_SECOND)).await;
     }
     error!(target: LOG_TARGET, "Height not achieved. Stats: {:?}", stats);
-    panic!(
-        "p2pool node '{}' didn't synchronize successfully at height {}, current chain height {}",
-        p2pool_name, height, local_height
-    );
+    Err(format!(
+        "p2pool node '{}' didn't synchronize successfully at height {}, current chain height {}",
+        p2pool_name, height, local_height
+    ).into())
 }
```
🧹 Nitpick comments (9)
integration_tests/README.md (2)
5-5
: Add a comma for better readability.
In this instruction line, a comma would improve clarity between the clauses.

```diff
-In its simplest form you can run the tests from the project route with `cargo test --release --test cucumber`
+In its simplest form, you can run the tests from the project route with `cargo test --release --test cucumber`
```

🧰 Tools
🪛 LanguageTool
[uncategorized] ~5-~5: Possible missing comma found.
Context: ...s ## Procedure to run In its simplest form you can run the tests from the project ...(AI_HYDRA_LEO_MISSING_COMMA)
11-11
: Replace placeholder values with concrete examples.
The pattern "?????????" is used as a placeholder in all examples. Using concrete examples would make the documentation more helpful for users. For example:

```diff
- cargo test --release --test cucumber -- --name "?????????"
+ cargo test --release --test cucumber -- --name "Sync blocks from tip"
```

And similar replacements for the other placeholders in the file.
Also applies to: 14-14, 34-34, 37-37, 40-40
integration_tests/.husky/_/husky.sh (1)
7-7
: Declare and assign the variable separately to avoid masking return values.
This is a best practice in shell scripting (ShellCheck SC2155) to ensure return values aren't inadvertently masked. Note that the assignment must come before `readonly`; making the variable read-only first would cause the assignment to fail.

```diff
- readonly hook_name="$(basename "$0")"
+ hook_name="$(basename "$0")"
+ readonly hook_name
```

🧰 Tools
🪛 Shellcheck (0.10.0)
[warning] 7-7: Declare and assign separately to avoid masking return values.
(SC2155)
integration_tests/tests/steps/base_node_steps.rs (1)
14-16
: Consider extracting common sleep pattern to a helper function.All three functions have the same 2-second sleep pattern. This could be extracted to a helper function to reduce code duplication.
```rust
async fn wait_after_spawn(name: &str) {
    debug!(target: LOG_TARGET, "Waiting 2 seconds after spawning '{}'", name);
    tokio::time::sleep(Duration::from_secs(2)).await;
}
```

Then use this in each function after spawning the node.
Also applies to: 22-24, 30-32
integration_tests/log4rs/cucumber.yml (2)
69-74
: Remove or clarify commented-out root logger configuration. Lines 69-74 are commented out. If they are no longer needed, consider removing them for clarity. If you plan to reintroduce them later, add a TODO comment explaining the reason.
75-79
: Beware of potential performance overhead with debug root logging. Running the root logger at debug level can generate extensive logs, especially in long-running tests. Consider lowering it to `info` or using environment-driven overrides.

integration_tests/src/p2pool_process.rs (1)
445-485
: Consider making the maximum wait time configurable. At present, the code can wait up to 3 minutes for the peer to connect. This might be too long for shorter test cycles. Making it configurable or lowering this threshold may improve your iteration speed.
integration_tests/src/miner.rs (2)
36-44
: Re-use wallet_payment_address throughout the process. The wallet payment address is properly cloned for use in the get_new_block request, but it would be better to generate it once before all operations.
```diff
 for i in 0..number_of_blocks {
     let wallet_payment_address = new_random_dual_tari_address().to_hex();
     let block = p2pool_client
         .get_new_block(GetNewBlockRequest {
             pow: Some(PowAlgo {
                 pow_algo: PowAlgos::Sha3x as i32,
             }),
             coinbase_extra: i.to_string(),
-            wallet_payment_address: wallet_payment_address.clone(),
+            wallet_payment_address: wallet_payment_address,
         })
```
113-128
: Add a timeout message to improve debugging. When waiting for the node to reach a specific height, adding a message indicating the timeout duration would improve the clarity of logs.
```diff
+    let timeout_seconds = (THIRTY_SECONDS_WITH_HALF_SECOND_SLEEP * HALF_SECOND) / 1000;
+    debug!(target: LOG_TARGET, "Waiting up to {} seconds for height {} to be reached", timeout_seconds, height);
     for i in 0..(THIRTY_SECONDS_WITH_HALF_SECOND_SLEEP) {
         let response = p2pool_client.get(stats_url.clone()).send().await.unwrap();
         if response.status().is_success() {
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
- `Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (34)
- `.github/workflows/audit.yml` (1 hunks)
- `cargo.toml` (1 hunks)
- `integration_tests/.husky/_/husky.sh` (1 hunks)
- `integration_tests/README.md` (1 hunks)
- `integration_tests/cargo.toml` (1 hunks)
- `integration_tests/log4rs/cucumber.yml` (1 hunks)
- `integration_tests/src/base_node_process.rs` (1 hunks)
- `integration_tests/src/lib.rs` (1 hunks)
- `integration_tests/src/miner.rs` (1 hunks)
- `integration_tests/src/p2pool_process.rs` (1 hunks)
- `integration_tests/src/world.rs` (1 hunks)
- `integration_tests/tests/cucumber.rs` (1 hunks)
- `integration_tests/tests/features/Sync.feature` (1 hunks)
- `integration_tests/tests/steps/base_node_steps.rs` (1 hunks)
- `integration_tests/tests/steps/mod.rs` (1 hunks)
- `integration_tests/tests/steps/p2pool_steps.rs` (1 hunks)
- `p2pool/Cargo.toml` (2 hunks)
- `p2pool/src/cli/args.rs` (4 hunks)
- `p2pool/src/cli/commands/mod.rs` (1 hunks)
- `p2pool/src/cli/commands/util.rs` (6 hunks)
- `p2pool/src/cli/mod.rs` (1 hunks)
- `p2pool/src/lib.rs` (1 hunks)
- `p2pool/src/main.rs` (1 hunks)
- `p2pool/src/server/config.rs` (4 hunks)
- `p2pool/src/server/grpc/p2pool.rs` (7 hunks)
- `p2pool/src/server/http/stats_collector.rs` (1 hunks)
- `p2pool/src/server/p2p/network.rs` (32 hunks)
- `p2pool/src/server/p2p/peer_store.rs` (3 hunks)
- `p2pool/src/server/server.rs` (1 hunks)
- `p2pool/src/sharechain/in_memory.rs` (4 hunks)
- `p2pool/src/sharechain/lmdb_block_storage.rs` (1 hunks)
- `p2pool/src/sharechain/p2block.rs` (1 hunks)
- `p2pool/src/sharechain/p2chain_level.rs` (1 hunks)
- `src/cli/mod.rs` (0 hunks)
💤 Files with no reviewable changes (1)
- src/cli/mod.rs
🚧 Files skipped from review as they are similar to previous changes (23)
- p2pool/src/server/server.rs
- integration_tests/tests/features/Sync.feature
- .github/workflows/audit.yml
- p2pool/src/server/grpc/p2pool.rs
- cargo.toml
- p2pool/src/sharechain/lmdb_block_storage.rs
- p2pool/src/sharechain/p2chain_level.rs
- p2pool/src/server/p2p/peer_store.rs
- integration_tests/tests/cucumber.rs
- p2pool/Cargo.toml
- p2pool/src/cli/mod.rs
- p2pool/src/sharechain/p2block.rs
- integration_tests/src/base_node_process.rs
- p2pool/src/cli/commands/mod.rs
- p2pool/src/server/config.rs
- p2pool/src/main.rs
- p2pool/src/server/p2p/network.rs
- integration_tests/tests/steps/p2pool_steps.rs
- p2pool/src/sharechain/in_memory.rs
- p2pool/src/cli/commands/util.rs
- p2pool/src/lib.rs
- integration_tests/cargo.toml
- integration_tests/src/world.rs
🧰 Additional context used
🪛 Shellcheck (0.10.0)
integration_tests/.husky/_/husky.sh
[warning] 7-7: Declare and assign separately to avoid masking return values.
(SC2155)
[warning] 17-17: ShellCheck can't follow non-constant source. Use a directive to specify location.
(SC1090)
[error] 20-20: This applies export to the variable named readonly, which is probably not what you want. Use a separate command or the appropriate declare
options instead.
(SC2316)
🪛 LanguageTool
integration_tests/README.md
[uncategorized] ~5-~5: Possible missing comma found.
Context: ...s ## Procedure to run In its simplest form you can run the tests from the project ...
(AI_HYDRA_LEO_MISSING_COMMA)
[style] ~30-~30: Three successive sentences begin with the same word. Consider rewording the sentence or use a thesaurus to find a synonym.
Context: ...est cucumber -- -t "@critical" ``` - To run a specific file or files add `-- --...
(ENGLISH_WORD_REPEAT_BEGINNING_RULE)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: test (esmeralda)
🔇 Additional comments (10)
p2pool/src/cli/args.rs (5)
16-16
: Exposing `StartArgs` struct and `base_dir` field for external access. These visibility changes make the struct and field accessible from outside modules, which aligns with the PR's cucumber testing objective. Integration tests will need to access these components.
Also applies to: 19-19
141-155
: Well-documented CLI arguments added for testing purposes. The new arguments enable important testing capabilities:

- `export_libp2p_info` facilitates test handshaking in cucumber tests
- `network_silence_delay` provides control over mining submission timing
- The minimum difficulty parameters allow for testing with controlled mining parameters
These additions are well-documented and made optional to maintain backward compatibility.
189-189
: Added Debug derivation for improved diagnostic capabilities. Adding the `Debug` trait to the `Cli` struct enables better debugging in tests and logging during test execution.
: Command handling logic refactored for better reusability.Extracting the command handling logic to the
run_with_cli
function improves code organization and allows for more flexible component testing.
228-243
: Extracted command handling into a public function for testing accessibility.The new
run_with_cli
function maintains the same functionality while making the command handling logic accessible to tests. This is a clean implementation that follows good code organization principles.p2pool/src/server/http/stats_collector.rs (1)
194-285
: LGTM! Clean restructuring of the asynchronous loop. The modifications to the asynchronous loop maintain the original functionality while improving code organization. The refactored structure handles shutdown signals, statistics reporting, and requests in a cleaner, more readable manner.
integration_tests/tests/steps/mod.rs (1)
26-38
: Remove leftover commented code. The commented assertions can be removed or replaced with actual usage, as previously discussed.
integration_tests/src/lib.rs (2)
19-30
: Potential infinite loop in port selection. This approach might lead to a tight loop if no ports are available. Consider implementing a maximum iteration count or a short sleep, as suggested before.
60-78
: Unwrap calls may panic if peer strings are invalid. This can lead to test panics if the peer names do not exist. Consider using safe unwrapping or returning a Result for better error handling, as previously discussed.
integration_tests/src/p2pool_process.rs (1)
259-288
: Add a short sleep in the port-free check loop. Spinning up to 20 times with no delay might cause excessive CPU usage, as noted previously. Adding a small delay (for example, 100 ms) can help mitigate this.
Actionable comments posted: 0
♻️ Duplicate comments (4)
integration_tests/.husky/_/husky.sh (1)
20-20
: ⚠️ Potential issue: Correct the export of a read-only variable.
The current line incorrectly combines `export` with `readonly` and results in ShellCheck error SC2316. Instead, separate the two operations:

```diff
-export readonly husky_skip_init=1
+readonly husky_skip_init=1
+export husky_skip_init
```

🧰 Tools
🪛 Shellcheck (0.10.0)
[error] 20-20: This applies export to the variable named readonly, which is probably not what you want. Use a separate command or the appropriate
declare
options instead.(SC2316)
integration_tests/tests/steps/mod.rs (3)
16-19
: ⚠️ Potential issue: Potential name conflict with function.
Both this function and the one at line 22 have the same name `wait_seconds`, which could cause compilation issues. Consider renaming one of them:
```diff
 #[when(expr = "I wait {int} seconds")]
-async fn wait_seconds(_world: &mut TariWorld, seconds: u64) {
+async fn wait_seconds_for_test(_world: &mut TariWorld, seconds: u64) {
     tokio::time::sleep(Duration::from_secs(seconds)).await;
 }
```
21-24
: ⚠️ Potential issue: Duplicate function name.
This function has the same name as the one at line 16, which could cause compilation issues.
Consider renaming it:
```diff
 #[then(expr = "I wait {int} seconds and stop")]
-async fn wait_seconds_and_stop(_world: &mut TariWorld, seconds: u64) {
+async fn wait_seconds_and_stop(_world: &mut TariWorld, seconds: u64) {
     tokio::time::sleep(Duration::from_secs(seconds)).await;
 }
```
26-38
: 🛠️ Refactor suggestion: Clean up commented code and improve error handling.
The function has several issues including commented-out code and not clearing the error after checking it.
Consider this implementation:
```diff
 #[then(regex = r"I receive an error containing '(.*)'")]
 async fn receive_an_error(world: &mut TariWorld, error: String) {
-    match world.errors.back() {
+    match world.errors.pop_back() {
         Some(err) => assert_eq!(err, &error),
         None => panic!("Should have received an error"),
     };
-
-    // No-op.
-    // Was not implemented in previous suite, gave it a quick try but missing other peices
-
-    // assert!(world.errors.len() > 1);
-    // assert!(world.errors.pop_front().unwrap().contains(&error))
 }
```
🧹 Nitpick comments (8)
integration_tests/README.md (2)
5-5
: Consider adding a comma for better readability. In its simplest form, you can run the tests from the project route with
cargo test --release --test cucumber
```diff
-In its simplest form you can run the tests from the project route with `cargo test --release --test cucumber`
+In its simplest form, you can run the tests from the project route with `cargo test --release --test cucumber`
```

🧰 Tools
🪛 LanguageTool
[uncategorized] ~5-~5: Possible missing comma found.
Context: ...s ## Procedure to run In its simplest form you can run the tests from the project ...(AI_HYDRA_LEO_MISSING_COMMA)
30-30
: Consider adding a comma and varying sentence beginnings. For better readability, add a comma after "files" and consider varying the beginning of sentences in your example descriptions.
```diff
-To run a specific file or files add `-- --input glob` to the command line
+To run a specific file or files, add `-- --input glob` to the command line
```

🧰 Tools
🪛 LanguageTool
[style] ~30-~30: Three successive sentences begin with the same word. Consider rewording the sentence or use a thesaurus to find a synonym.
Context: ...est cucumber -- -t "@critical" ``` - To run a specific file or files add `-- --...(ENGLISH_WORD_REPEAT_BEGINNING_RULE)
[uncategorized] ~30-~30: Possible missing comma found.
Context: ...cal" ``` - To run a specific file or files add-- --input glob
to the command li...(AI_HYDRA_LEO_MISSING_COMMA)
integration_tests/.husky/_/husky.sh (2)
7-7
: Consider declaring and assigning separately. ShellCheck warns about potential issues with combining declaration and assignment for command substitution.
```diff
-readonly hook_name="$(basename "$0")"
+hook_name="$(basename "$0")"
+readonly hook_name
```

🧰 Tools
🪛 Shellcheck (0.10.0)
[warning] 7-7: Declare and assign separately to avoid masking return values.
(SC2155)
24-27
: Consider quoting the exitCode variable. When comparing and using shell variables, it's good practice to quote them to handle spaces and special characters correctly.
```diff
-if [ $exitCode != 0 ]; then
+if [ "$exitCode" != 0 ]; then
     echo "husky - $hook_name hook exited with code $exitCode (error)"
-    exit $exitCode
+    exit "$exitCode"
 fi
```

integration_tests/tests/steps/mod.rs (1)
12-14
: Consider using durations for time-related constants. For constants representing time periods, consider using Duration directly instead of raw u64 values, which would make their usage more type-safe.
```diff
-pub const CONFIRMATION_PERIOD: u64 = 4;
-pub const TWO_MINUTES_WITH_HALF_SECOND_SLEEP: u64 = 240;
-pub const HALF_SECOND: u64 = 500;
+pub const CONFIRMATION_PERIOD: Duration = Duration::from_secs(4);
+pub const TWO_MINUTES_WITH_HALF_SECOND_SLEEP: Duration = Duration::from_secs(240);
+pub const HALF_SECOND: Duration = Duration::from_millis(500);
```

integration_tests/log4rs/cucumber.yml (1)
69-74
: Commented out code should be removed. It appears that commented-out code was left in the configuration file. If this is no longer needed, it should be removed entirely.
```diff
 # We don't want prints during cucumber test, everything useful will in logs.
-# root:
-#   level: warn
-#   appenders:
-#     - stdout
```

integration_tests/src/p2pool_process.rs (1)
259-269
: Wait for child process to exit after `kill()`

Currently, the `kill` method sends a kill signal to the child process but never calls `wait()` to fully reap the process. Failing to reap a process can lead to zombie processes on some platforms. Consider adding:
```diff
 if let Err(e) = child.kill() {
     println!(
         "Failed to kill p2pool node: '{}', process id: {}, '{}'",
         self.name.clone(),
         child.id(),
         e
     );
 }
+if let Err(e) = child.wait() {
+    println!(
+        "Failed to wait for p2pool node: '{}', process id: {}, '{}'",
+        self.name.clone(),
+        child.id(),
+        e
+    );
+}
```

integration_tests/tests/steps/p2pool_steps.rs (1)
19-20
: Avoid fixed 2-second sleeps in favor of readiness checks. Using fixed-duration sleeps can introduce flakiness if the environment is slower or faster than expected. Consider actively polling the node or using readiness signals for more robust test execution flow.
Also applies to: 26-27, 33-34, 40-41, 47-48
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
- `Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (34)
- `.github/workflows/audit.yml` (1 hunks)
- `cargo.toml` (1 hunks)
- `integration_tests/.husky/_/husky.sh` (1 hunks)
- `integration_tests/README.md` (1 hunks)
- `integration_tests/cargo.toml` (1 hunks)
- `integration_tests/log4rs/cucumber.yml` (1 hunks)
- `integration_tests/src/base_node_process.rs` (1 hunks)
- `integration_tests/src/lib.rs` (1 hunks)
- `integration_tests/src/miner.rs` (1 hunks)
- `integration_tests/src/p2pool_process.rs` (1 hunks)
- `integration_tests/src/world.rs` (1 hunks)
- `integration_tests/tests/cucumber.rs` (1 hunks)
- `integration_tests/tests/features/Sync.feature` (1 hunks)
- `integration_tests/tests/steps/base_node_steps.rs` (1 hunks)
- `integration_tests/tests/steps/mod.rs` (1 hunks)
- `integration_tests/tests/steps/p2pool_steps.rs` (1 hunks)
- `p2pool/Cargo.toml` (2 hunks)
- `p2pool/src/cli/args.rs` (4 hunks)
- `p2pool/src/cli/commands/mod.rs` (1 hunks)
- `p2pool/src/cli/commands/util.rs` (6 hunks)
- `p2pool/src/cli/mod.rs` (1 hunks)
- `p2pool/src/lib.rs` (1 hunks)
- `p2pool/src/main.rs` (1 hunks)
- `p2pool/src/server/config.rs` (4 hunks)
- `p2pool/src/server/grpc/p2pool.rs` (7 hunks)
- `p2pool/src/server/http/stats_collector.rs` (1 hunks)
- `p2pool/src/server/p2p/network.rs` (32 hunks)
- `p2pool/src/server/p2p/peer_store.rs` (3 hunks)
- `p2pool/src/server/server.rs` (1 hunks)
- `p2pool/src/sharechain/in_memory.rs` (4 hunks)
- `p2pool/src/sharechain/lmdb_block_storage.rs` (1 hunks)
- `p2pool/src/sharechain/p2block.rs` (1 hunks)
- `p2pool/src/sharechain/p2chain_level.rs` (1 hunks)
- `src/cli/mod.rs` (0 hunks)
💤 Files with no reviewable changes (1)
- src/cli/mod.rs
🚧 Files skipped from review as they are similar to previous changes (26)
- p2pool/src/sharechain/p2chain_level.rs
- p2pool/src/server/server.rs
- integration_tests/tests/features/Sync.feature
- .github/workflows/audit.yml
- p2pool/src/sharechain/lmdb_block_storage.rs
- p2pool/src/server/grpc/p2pool.rs
- integration_tests/tests/steps/base_node_steps.rs
- p2pool/src/server/p2p/peer_store.rs
- p2pool/Cargo.toml
- cargo.toml
- integration_tests/tests/cucumber.rs
- p2pool/src/cli/mod.rs
- p2pool/src/cli/commands/mod.rs
- p2pool/src/server/http/stats_collector.rs
- integration_tests/src/base_node_process.rs
- p2pool/src/server/p2p/network.rs
- p2pool/src/main.rs
- p2pool/src/cli/commands/util.rs
- integration_tests/cargo.toml
- p2pool/src/server/config.rs
- p2pool/src/lib.rs
- p2pool/src/sharechain/in_memory.rs
- integration_tests/src/lib.rs
- p2pool/src/cli/args.rs
- p2pool/src/sharechain/p2block.rs
- integration_tests/src/world.rs
🧰 Additional context used
🪛 Shellcheck (0.10.0)
integration_tests/.husky/_/husky.sh
[warning] 7-7: Declare and assign separately to avoid masking return values.
(SC2155)
[warning] 17-17: ShellCheck can't follow non-constant source. Use a directive to specify location.
(SC1090)
[error] 20-20: This applies export to the variable named readonly, which is probably not what you want. Use a separate command or the appropriate declare
options instead.
(SC2316)
🪛 LanguageTool
integration_tests/README.md
[uncategorized] ~5-~5: Possible missing comma found.
Context: ...s ## Procedure to run In its simplest form you can run the tests from the project ...
(AI_HYDRA_LEO_MISSING_COMMA)
[style] ~30-~30: Three successive sentences begin with the same word. Consider rewording the sentence or use a thesaurus to find a synonym.
Context: ...est cucumber -- -t "@critical" ``` - To run a specific file or files add `-- --...
(ENGLISH_WORD_REPEAT_BEGINNING_RULE)
[uncategorized] ~30-~30: Possible missing comma found.
Context: ...cal" ``` - To run a specific file or files add -- --input glob
to the command li...
(AI_HYDRA_LEO_MISSING_COMMA)
⏰ Context from checks skipped due to timeout of 90000ms (2)
- GitHub Check: test (esmeralda)
- GitHub Check: machete
🔇 Additional comments (10)
integration_tests/README.md (4)
11-11
: Replace placeholder in example. The current example shows placeholders (`?????????`) instead of actual test names. Consider replacing the placeholder with an actual test name from the test suite, which would make the example more helpful for users.
34-37
: Replace placeholders in examples. The examples contain placeholder values (`?????????`) instead of actual file paths or glob patterns. Consider replacing them with realistic examples like:
```diff
-cargo test --release --test cucumber -- --input "?????????"
+cargo test --release --test cucumber -- --input "tests/features/Sync.feature"

-cargo test --release --test cucumber -- --input "?????????"
+cargo test --release --test cucumber -- --input "tests/features/*.feature"
```
43-47
: Good documentation of work-in-progress state. The notes section clearly communicates the current state of the test suite and expectations for future development.
49-52
: Clear CI execution strategy. The CI section clearly documents when different tests will run, which is valuable for contributors.
integration_tests/log4rs/cucumber.yml (2)
75-78
: Consider restricting root logger level. The root logger is set to
debug
level, which can be verbose. Given the comment above about not wanting prints during cucumber tests, you might want to set this to a higher level likeinfo
orwarn
.root: - level: debug + level: info appenders: - stdout
56-61
: Inconsistent size limit for appenders.The
other
appender has a size limit of 10MB while all other appenders use 100MB. If this is intentional, consider adding a comment explaining why, otherwise consider making them consistent.trigger: kind: size - limit: 10mb + limit: 100mb # Consistent with other appendersintegration_tests/src/p2pool_process.rs (1)
271-286
: Reintroduce a short sleep in the loop for checking freed ports. This loop attempts up to 20 re-binds with no delay, which can cause excessive CPU usage. As previously suggested, add a small delay (e.g.,
std::time::Duration::from_millis(100)
) within each iteration:

```diff
 loop {
+    std::thread::sleep(std::time::Duration::from_millis(100));
     if !p2p_port_closed && TcpListener::bind(("127.0.0.1", self.config.p2p_port)).is_ok() {
         p2p_port_closed = true;
     }
     if !grpc_port_closed && TcpListener::bind(("127.0.0.1", self.config.grpc_port)).is_ok() {
         grpc_port_closed = true;
     }
     attempts += 1;
     if attempts >= 20 {
         break;
     }
 }
```
integration_tests/src/miner.rs (3)
27-71
: Use robust error handling instead of multipleunwrap
calls in block miningThis function relies on multiple
unwrap
calls, which can panic if the gRPC client or the remote node is inaccessible. Similar feedback was provided before to handle these errors gracefully by returning aResult
or logging them appropriately.
85-99
: Return aResult
fromfind_sha3x_header_with_achieved_difficulty
Previously, a suggestion was made to replace the direct
unwrap()
andpanic!
calls with aResult
to provide more controlled error handling. Doing so can improve resilience in CI and reduce the risk of abrupt test failures.
101-140
: Avoid panics inverify_block_height
by returning aResult
This function uses
unwrap
and ultimatelypanic!
when the node fails to reach the required height, which can abruptly halt the entire test suite. A prior recommendation advised returning aResult
so the test harness can handle errors more gracefully.
Actionable comments posted: 2
♻️ Duplicate comments (7)
p2pool/src/cli/commands/util.rs (2)
181-181
: ⚠️ Potential issue: Fix incorrect squad calculation.
The expression `min(1, config.p2p_service.num_squads)` will always return 1, making the squad calculation incorrect. This effectively ignores the num_squads configuration. Apply this diff:

```diff
-(*swarm.local_peer_id().to_bytes().last().unwrap_or(&0) as usize) % min(1, config.p2p_service.num_squads);
+(*swarm.local_peer_id().to_bytes().last().unwrap_or(&0) as usize) % config.p2p_service.num_squads;
```
233-236
: ⚠️ Potential issue: Replace non-existent `fs::exists` with a proper existence check.

The `fs::exists` function doesn't exist in the standard library. Use `Path::exists()` instead. Apply this diff:

```diff
-if let Ok(false) = fs::exists(parent) {
+if !parent.exists() {
     fs::create_dir_all(parent)?;
 }
```

integration_tests/.husky/_/husky.sh (1)
20-20
: ⚠️ Potential issue: Correct the export of a read-only variable.
The current line incorrectly combines
export
withreadonly
and results in ShellCheck error SC2316. Instead, separate the two operations.-export readonly husky_skip_init=1 +readonly husky_skip_init=1 +export husky_skip_init🧰 Tools
🪛 Shellcheck (0.10.0)
[error] 20-20: This applies export to the variable named readonly, which is probably not what you want. Use a separate command or the appropriate
declare
options instead.(SC2316)
integration_tests/src/miner.rs (4)
27-71
: 🛠️ Refactor suggestion: Handle errors gracefully in mine_and_submit_tari_blocks.
Multiple unwrap calls could cause panics. Consider proper error handling.
-pub async fn mine_and_submit_tari_blocks(world: &mut TariWorld, number_of_blocks: u64, p2pool_name: String) { +pub async fn mine_and_submit_tari_blocks(world: &mut TariWorld, number_of_blocks: u64, p2pool_name: String) -> anyhow::Result<()> { debug!(target: LOG_TARGET, "add {} blocks to p2pool node '{}'", number_of_blocks, p2pool_name); let mut p2pool_client = world .get_p2pool_grpc_client(&p2pool_name) .await - .unwrap_or_else(|_| panic!("Couldn't get the p2pool node grpc client '{}'", p2pool_name)); + .map_err(|e| anyhow::anyhow!("Couldn't get the p2pool node grpc client '{}': {}", p2pool_name, e))?; for i in 0..number_of_blocks { let wallet_payment_address = new_random_dual_tari_address().to_hex(); let block = p2pool_client .get_new_block(GetNewBlockRequest { pow: Some(PowAlgo { pow_algo: PowAlgos::Sha3x as i32, }), coinbase_extra: i.to_string(), wallet_payment_address: wallet_payment_address.clone(), }) .await - .unwrap(); + .map_err(|e| anyhow::anyhow!("Failed to get new block: {}", e))?; let block_response = block.into_inner(); let target_difficulty = block_response.target_difficulty; - let block_result = block_response.block.unwrap(); - let grpc_block = block_result.block.unwrap(); - let mut block = Block::try_from(grpc_block).unwrap(); + let block_result = block_response.block.ok_or_else(|| anyhow::anyhow!("Block response missing block"))?; + let grpc_block = block_result.block.ok_or_else(|| anyhow::anyhow!("Block result missing block"))?; + let mut block = Block::try_from(grpc_block) + .map_err(|e| anyhow::anyhow!("Failed to convert gRPC block: {}", e))?; debug!( target: LOG_TARGET, "mining block '{}' (height '{}') with target difficulty '{}' and wallet_payment_address '{}'", i, block.header.height, target_difficulty, wallet_payment_address ); - find_sha3x_header_with_achieved_difficulty(&mut block.header, Difficulty::from_u64(target_difficulty).unwrap()); + let difficulty = Difficulty::from_u64(target_difficulty) + .map_err(|_| anyhow::anyhow!("Invalid 
target difficulty: {}", target_difficulty))?; + find_sha3x_header_with_achieved_difficulty(&mut block.header, difficulty)?; p2pool_client .submit_block(SubmitBlockRequest { - block: Some(GrpcBlock::try_from(block).unwrap()), - wallet_payment_address: new_random_dual_tari_address().to_hex(), + block: Some(GrpcBlock::try_from(block) + .map_err(|e| anyhow::anyhow!("Failed to convert block to gRPC: {}", e))?), + wallet_payment_address: wallet_payment_address.clone(), }) .await - .unwrap(); + .map_err(|e| anyhow::anyhow!("Failed to submit block: {}", e))?; } debug!(target: LOG_TARGET, "{} blocks added to p2pool node '{}'", number_of_blocks, p2pool_name); + Ok(()) }
64-64
: ⚠️ Potential issue: Use the same wallet payment address for submit_block as used in get_new_block.
The payment address is correctly used in the get_new_block request, but then a new random address is generated for submit_block. This is inconsistent and could lead to unexpected behavior.
```diff
-wallet_payment_address: new_random_dual_tari_address().to_hex(),
+wallet_payment_address: wallet_payment_address.clone(),
```
85-99
: 🛠️ Refactor suggestion: Improve error handling in find_sha3x_header_with_achieved_difficulty.
The function uses unwrap on the result of sha3x_difficulty and panics after 100 million tries. Consider returning a Result type for better error handling.
-pub fn find_sha3x_header_with_achieved_difficulty(header: &mut BlockHeader, achieved_difficulty: Difficulty) { +pub fn find_sha3x_header_with_achieved_difficulty(header: &mut BlockHeader, achieved_difficulty: Difficulty) -> anyhow::Result<()> { let mut num_tries = 0; - while sha3x_difficulty(header).unwrap() != achieved_difficulty { + while sha3x_difficulty(header).map_err(|e| anyhow::anyhow!("Failed to calculate difficulty: {}", e))? != achieved_difficulty { header.nonce += 1; num_tries += 1; if num_tries > 100_000_000 { // Just in case we burn a hole in the CI server - panic!( - "Could not find a nonce for achieved difficulty {} in time", - achieved_difficulty - ); + return Err(anyhow::anyhow!( + "Could not find a nonce for achieved difficulty {} in time", + achieved_difficulty + )); } } + Ok(()) }
101-150
: 🛠️ Refactor suggestion: Handle errors gracefully in verify_block_height.
Similar to the mine_and_submit_tari_blocks function, this function uses unwrap in multiple places and panics if the height isn't reached, which could lead to unhandled crashes.
-pub async fn verify_block_height(world: &mut TariWorld, p2pool_name: String, height: u64) { +pub async fn verify_block_height(world: &mut TariWorld, p2pool_name: String, height: u64) -> anyhow::Result<()> { debug!(target: LOG_TARGET, "verify '{}' is at height {}", p2pool_name, height); let start = Instant::now(); - let p2pool_process = world.get_p2pool_node(&p2pool_name).unwrap(); + let p2pool_process = world.get_p2pool_node(&p2pool_name) + .ok_or_else(|| anyhow::anyhow!("P2Pool node '{}' not found", p2pool_name))?; if !p2pool_process.config.http_server.enabled { - panic!("p2pool node '{}' doesn't have the http server enabled", p2pool_name); + return Err(anyhow::anyhow!("p2pool node '{}' doesn't have the http server enabled", p2pool_name)); } let stats_url = format!("http://127.0.0.1:{}/stats", p2pool_process.config.http_server.port); let p2pool_client = Client::new(); let mut local_height = 0; let mut stats: Value = Value::Null; for i in 0..(THIRTY_SECONDS_WITH_HALF_SECOND_SLEEP) { - let response = p2pool_client.get(stats_url.clone()).send().await.unwrap(); + let response = p2pool_client.get(stats_url.clone()).send().await + .map_err(|e| anyhow::anyhow!("Failed to send request to {}: {}", stats_url, e))?; if response.status().is_success() { - stats = response.json().await.unwrap(); - local_height = stats["sha3x_stats"]["height"].as_u64().unwrap() + 1; + stats = response.json().await + .map_err(|e| anyhow::anyhow!("Failed to parse JSON response from {}: {}", stats_url, e))?; + local_height = stats["sha3x_stats"]["height"].as_u64() + .ok_or_else(|| anyhow::anyhow!("Missing or invalid height in response"))? 
+ 1; if local_height == height { debug!( target: LOG_TARGET, "Node '{}' is at height '{}' (waited {:.2?})", p2pool_name, local_height, start.elapsed() ); - return; + return Ok(()); } else if local_height > height { - panic!( - "Node '{}' busted its height - should be at '{}', now at '{}' (waited {:.2?})", - p2pool_name, height, local_height, start.elapsed() - ); + return Err(anyhow::anyhow!( + "Node '{}' busted its height - should be at '{}', now at '{}' (waited {:.2?})", + p2pool_name, height, local_height, start.elapsed() + )); } else { debug!( target: LOG_TARGET, "{}: '{}' is at height {}, need to be at {}", i, p2pool_name, local_height, height ); } } else { - panic!("Failed to query {} for stats: {}", stats_url, response.status()); + return Err(anyhow::anyhow!("Failed to query {} for stats: {}", stats_url, response.status())); } tokio::time::sleep(Duration::from_millis(HALF_SECOND)).await; } error!(target: LOG_TARGET, "Height not achieved. Stats: {:?}", stats); - panic!( - "p2pool node '{}' didn't synchronize successfully at height {}, current chain height {}", - p2pool_name, height, local_height - ); + Err(anyhow::anyhow!( + "p2pool node '{}' didn't synchronize successfully at height {}, current chain height {}", + p2pool_name, height, local_height + )) }
🧹 Nitpick comments (11)
p2pool/src/cli/commands/util.rs (1)
185-193
: Consider adding a success log message. While there's a warning for failures, adding a success log message when the libp2p info is successfully saved would improve debugging and provide confirmation that the operation worked as expected.
Apply this diff:
```diff
 if let Err(err) = libp2p_info.save_to_file(&path) {
     warn!(target: LOG_TARGET, "Failed to save libp2p info to file: '{}'", err);
+} else {
+    info!(target: LOG_TARGET, "Successfully saved libp2p info to file: '{}'", path.display());
 }
```

integration_tests/README.md (2)
5-5
: Add a comma after "form" for improved readability. In its simplest form, you can run the tests from the project route with
cargo test --release --test cucumber
-In its simplest form you can run the tests from the project route with `cargo test --release --test cucumber` +In its simplest form, you can run the tests from the project route with `cargo test --release --test cucumber`🧰 Tools
🪛 LanguageTool
[uncategorized] ~5-~5: Possible missing comma found.
Context: ...s ## Procedure to run In its simplest form you can run the tests from the project ...(AI_HYDRA_LEO_MISSING_COMMA)
30-30
: Add a comma after "files" for improved readability.To run a specific file or files, add
-- --input glob
to the command line-To run a specific file or files add `-- --input glob` to the command line +To run a specific file or files, add `-- --input glob` to the command line🧰 Tools
🪛 LanguageTool
[style] ~30-~30: Three successive sentences begin with the same word. Consider rewording the sentence or use a thesaurus to find a synonym.
Context: ...est cucumber -- -t "@critical" ``` - To run a specific file or files add `-- --...(ENGLISH_WORD_REPEAT_BEGINNING_RULE)
[uncategorized] ~30-~30: Possible missing comma found.
Context: ...cal" ``` - To run a specific file or files add-- --input glob
to the command li...(AI_HYDRA_LEO_MISSING_COMMA)
integration_tests/.husky/_/husky.sh (1)
7-7
: Declare and assign separately to avoid masking return values.ShellCheck warns about potential issues with combining declaration and assignment for command substitution.
-readonly hook_name="$(basename "$0")" +hook_name="$(basename "$0")" +readonly hook_name🧰 Tools
🪛 Shellcheck (0.10.0)
[warning] 7-7: Declare and assign separately to avoid masking return values.
(SC2155)
integration_tests/cargo.toml (1)
28-28
: Move the cucumber dependency to a [dev-dependencies] section.Since cucumber is only used for testing, declare it under [dev-dependencies] so it is not compiled into non-test builds. Note that Cargo has no per-dependency "test-only" key, so such a flag would be rejected.
-cucumber = { version = "0.20.0", features = ["default", "libtest", "output-junit"] } +[dev-dependencies] +cucumber = { version = "0.20.0", features = ["default", "libtest", "output-junit"] }integration_tests/tests/steps/mod.rs (1)
12-14
: Clarify constant values and units.The constants have some inconsistencies:
TWO_MINUTES_WITH_HALF_SECOND_SLEEP
is set to 240, which is an iteration count, not a duration: 240 iterations with 500 ms sleeps totals 2 minutes, so the name only makes sense when read as a loop boundHALF_SECOND
is set to 500, but the unit is not specified in the name (milliseconds)For better clarity:
pub const CONFIRMATION_PERIOD: u64 = 4; -pub const TWO_MINUTES_WITH_HALF_SECOND_SLEEP: u64 = 240; -pub const HALF_SECOND: u64 = 500; +pub const TWO_MINUTES_WITH_HALF_SECOND_SLEEP: u64 = 240; // iteration count: 240 × 500 ms sleeps = 2 minutes +pub const HALF_SECOND_MS: u64 = 500;Or add comments explaining the reasoning behind the values.
integration_tests/src/p2pool_process.rs (5)
32-32
: Consider using a more descriptive type alias.The
ShaP2PoolGrpcClient
type alias is highly specific but its name doesn't clearly express that it includes authentication.-pub type ShaP2PoolGrpcClient = ShaP2PoolClient<InterceptedService<TonicChannel, ClientAuthenticationInterceptor>>; +pub type AuthenticatedShaP2PoolClient = ShaP2PoolClient<InterceptedService<TonicChannel, ClientAuthenticationInterceptor>>;
70-71
: Function is marked with #[allow(clippy::too_many_lines)].This function is complex and exceeds Clippy's recommended line count. Consider refactoring it into smaller, more focused functions.
The function could be broken down into:
- A function for port and config setup
- A function for directory creation
- A function for setting up the command and spawning the process
113-138
: Match expression could be simplified.The match expression contains complex error handling that could be simplified with the
?
operator or more idiomatic Rust.- match fs::exists(temp_dir_path.clone()) { - Ok(false) => { - if let Err(err) = fs::create_dir_all(&temp_dir_path) { - panic!( - "Failed to create temp_dir_path at: '{}', error: {}", - temp_dir_path.display(), - err - ); - } - if let Err(err) = fs::create_dir_all(temp_dir_path.join("configs")) { - panic!( - "Failed to create configs dir at: '{}', error: {}", - temp_dir_path.join("configs").display(), - err - ); - } - }, - Err(err) => { - panic!( - "Failed to check if temp_dir_path exists at: '{}', error: {}", - temp_dir_path.display(), - err - ); - }, - _ => {}, - } + // Create necessary directories if they don't exist + if let Ok(false) = fs::exists(temp_dir_path.clone()) { + fs::create_dir_all(&temp_dir_path).unwrap_or_else(|err| { + panic!( + "Failed to create temp_dir_path at: '{}', error: {}", + temp_dir_path.display(), + err + ); + }); + + fs::create_dir_all(temp_dir_path.join("configs")).unwrap_or_else(|err| { + panic!( + "Failed to create configs dir at: '{}', error: {}", + temp_dir_path.join("configs").display(), + err + ); + }); + }
223-227
: Consider using unwrap_or_else for command.spawn() error handling.For consistency with other error handling patterns in this file, use
unwrap_or_else
for spawn error handling.- match command.spawn() { - Ok(child) => child, - Err(err) => panic!("Failed to start p2pool node '{}': {}", name_cloned, err), - } + command.spawn().unwrap_or_else(|err| panic!("Failed to start p2pool node '{}': {}", name_cloned, err))
290-415
: Refactor to_args_command_line to reduce repetition.The function is very repetitive with similar code patterns repeated for each argument. Consider using a more concise approach.
You could use a helper function or macro to reduce repetition. Here's a simplified example:
pub fn to_args_command_line(args: StartArgs) -> Vec<String> { let mut args_vec = Vec::new(); - if let Some(base_dir) = args.base_dir { - args_vec.push(format!("--base-dir={}", base_dir.display())); - } - - if let Some(grpc_port) = args.grpc_port { - args_vec.push(format!("--grpc-port={}", grpc_port)); - } + // Helper macro to add optional arguments + macro_rules! add_optional_arg { + ($option:expr, $name:expr, $format:expr) => { + if let Some(value) = $option { + args_vec.push(format!("--{}={}", $name, $format(value))); + } + }; + } + + // Helper macro to add boolean flag arguments + macro_rules! add_flag_if_true { + ($condition:expr, $name:expr) => { + if $condition { + args_vec.push(format!("--{}", $name)); + } + }; + } + + // Add optional arguments + add_optional_arg!(args.base_dir, "base-dir", |v: PathBuf| v.display()); + add_optional_arg!(args.grpc_port, "grpc-port", |v: u16| v.to_string()); // ... other similar blocks + + // Add flags + add_flag_if_true!(args.no_default_seed_peers, "no-default-seed-peers"); + add_flag_if_true!(args.stable_peer, "stable-peer"); + // ... other flags args_vec }This is just one approach - you could also use a map or other data structure to encode the argument patterns.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
Cargo.lock
is excluded by!**/*.lock
📒 Files selected for processing (34)
.github/workflows/audit.yml
(1 hunks)cargo.toml
(1 hunks)integration_tests/.husky/_/husky.sh
(1 hunks)integration_tests/README.md
(1 hunks)integration_tests/cargo.toml
(1 hunks)integration_tests/log4rs/cucumber.yml
(1 hunks)integration_tests/src/base_node_process.rs
(1 hunks)integration_tests/src/lib.rs
(1 hunks)integration_tests/src/miner.rs
(1 hunks)integration_tests/src/p2pool_process.rs
(1 hunks)integration_tests/src/world.rs
(1 hunks)integration_tests/tests/cucumber.rs
(1 hunks)integration_tests/tests/features/Sync.feature
(1 hunks)integration_tests/tests/steps/base_node_steps.rs
(1 hunks)integration_tests/tests/steps/mod.rs
(1 hunks)integration_tests/tests/steps/p2pool_steps.rs
(1 hunks)p2pool/Cargo.toml
(2 hunks)p2pool/src/cli/args.rs
(4 hunks)p2pool/src/cli/commands/mod.rs
(1 hunks)p2pool/src/cli/commands/util.rs
(6 hunks)p2pool/src/cli/mod.rs
(1 hunks)p2pool/src/lib.rs
(1 hunks)p2pool/src/main.rs
(1 hunks)p2pool/src/server/config.rs
(4 hunks)p2pool/src/server/grpc/p2pool.rs
(7 hunks)p2pool/src/server/http/stats_collector.rs
(1 hunks)p2pool/src/server/p2p/network.rs
(32 hunks)p2pool/src/server/p2p/peer_store.rs
(3 hunks)p2pool/src/server/server.rs
(1 hunks)p2pool/src/sharechain/in_memory.rs
(4 hunks)p2pool/src/sharechain/lmdb_block_storage.rs
(1 hunks)p2pool/src/sharechain/p2block.rs
(1 hunks)p2pool/src/sharechain/p2chain_level.rs
(1 hunks)src/cli/mod.rs
(0 hunks)
💤 Files with no reviewable changes (1)
- src/cli/mod.rs
🚧 Files skipped from review as they are similar to previous changes (24)
- .github/workflows/audit.yml
- p2pool/src/server/server.rs
- p2pool/src/cli/mod.rs
- p2pool/Cargo.toml
- p2pool/src/server/http/stats_collector.rs
- p2pool/src/server/grpc/p2pool.rs
- p2pool/src/sharechain/p2chain_level.rs
- p2pool/src/sharechain/lmdb_block_storage.rs
- p2pool/src/cli/commands/mod.rs
- p2pool/src/sharechain/p2block.rs
- integration_tests/tests/steps/base_node_steps.rs
- integration_tests/tests/cucumber.rs
- p2pool/src/server/p2p/network.rs
- p2pool/src/server/config.rs
- integration_tests/log4rs/cucumber.yml
- p2pool/src/cli/args.rs
- p2pool/src/sharechain/in_memory.rs
- integration_tests/src/base_node_process.rs
- p2pool/src/server/p2p/peer_store.rs
- cargo.toml
- p2pool/src/lib.rs
- integration_tests/src/lib.rs
- integration_tests/src/world.rs
- integration_tests/tests/steps/p2pool_steps.rs
🧰 Additional context used
🪛 Shellcheck (0.10.0)
integration_tests/.husky/_/husky.sh
[warning] 7-7: Declare and assign separately to avoid masking return values.
(SC2155)
[warning] 17-17: ShellCheck can't follow non-constant source. Use a directive to specify location.
(SC1090)
[error] 20-20: This applies export to the variable named readonly, which is probably not what you want. Use a separate command or the appropriate declare
options instead.
(SC2316)
🪛 LanguageTool
integration_tests/README.md
[uncategorized] ~5-~5: Possible missing comma found.
Context: ...s ## Procedure to run In its simplest form you can run the tests from the project ...
(AI_HYDRA_LEO_MISSING_COMMA)
[style] ~30-~30: Three successive sentences begin with the same word. Consider rewording the sentence or use a thesaurus to find a synonym.
Context: ...est cucumber -- -t "@critical" ``` - To run a specific file or files add `-- --...
(ENGLISH_WORD_REPEAT_BEGINNING_RULE)
[uncategorized] ~30-~30: Possible missing comma found.
Context: ...cal" ``` - To run a specific file or files add -- --input glob
to the command li...
(AI_HYDRA_LEO_MISSING_COMMA)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: test (esmeralda)
🔇 Additional comments (10)
p2pool/src/main.rs (1)
13-13
: The CLI import path has been correctly updated.The import has been changed from the local
crate::cli
module tosha_p2pool::Cli
, aligning with the module restructuring mentioned in the PR summary. This is part of the refactoring that includes workspace adjustments to incorporate the new integration tests.p2pool/src/cli/commands/util.rs (2)
224-259
: LGTM: Well-structured LibP2pInfo implementation.The LibP2pInfo struct and its associated methods are well-designed for serializing and deserializing peer information. The timeout mechanism in read_from_file is particularly useful for handling race conditions when waiting for files to be created.
89-100
: LGTM: Good additional configuration parameters.The new configuration options for network silence delay and minimum target difficulties for SHA3 and RandomX are well-integrated into the server configuration builder.
integration_tests/tests/features/Sync.feature (1)
1-59
: LGTM! Well-structured test scenarios.The feature file defines clear test scenarios for p2pool node synchronization, with a logical progression of steps and appropriate assertions. The scenarios test both normal synchronization between nodes in the same squad and isolation between different squads.
integration_tests/cargo.toml (2)
11-22
:❓ Verification inconclusive
Consider a strategy for dependency version management.
All Tari dependencies are pinned to a specific Git revision. This is good for reproducibility but could make updates challenging.
🏁 Script executed:
#!/bin/bash # Check if the Git revision used for Tari dependencies is recent git ls-remote https://github.com/tari-project/tari.git 53c1d47971994b2ac8795768e4b8a44800ea9988Length of output: 97
Action Required: Dependency Version Management Strategy Review
The pinned dependencies in
integration_tests/cargo.toml
(lines 11-22) ensure reproducibility; however, they could hinder future updates. The verification command returned ambiguous output regarding the existence or recency of the specified commit hash. Please manually verify that the commit hash53c1d47971994b2ac8795768e4b8a44800ea9988
is correct and consider a more flexible version management strategy (e.g., using tags, branch references, or Cargo’s patch features) to ease future updates.
- File:
integration_tests/cargo.toml
(lines 11-22)- Action Items:
- Confirm that the current commit hash exists and is appropriate.
- Evaluate alternative dependency versioning strategies to balance reproducibility with update flexibility.
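One of the more flexible strategies mentioned above can be sketched as a `Cargo.toml` fragment: pin the Git dependency to a release tag rather than a raw commit, and use Cargo's `[patch]` table to redirect the source to a local checkout while developing. The crate name, tag, and path below are illustrative, not taken from this repository:

```toml
# Pin to a tag instead of a bare commit hash (tag name is hypothetical).
[dependencies]
minotari_app_grpc = { git = "https://github.com/tari-project/tari.git", tag = "v1.9.1" }

# During local development, temporarily redirect every crate fetched from that
# Git source to a local checkout without editing each dependency line.
[patch."https://github.com/tari-project/tari.git"]
minotari_app_grpc = { path = "../tari/applications/minotari_app_grpc" }
```

A tag still resolves to a single commit in `Cargo.lock`, so builds remain reproducible, while being far easier to audit and bump than an opaque hash.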
57-57
:❓ Verification inconclusive
Verify serde_json version specificity.
The serde_json dependency is pinned to a very specific version (1.0.128) compared to other dependencies. Ensure this is intentional.
🌐 Web query:
What is the latest version of serde_json?
💡 Result:
The latest version of
serde_json
as of February 2025 is 1.0.139, according to its GitHub release history[7] and documentation[4]. This version requires Rust compiler version 1.36 or newer[6].For developers using
serde_json
, key features include:
- Strongly typed JSON parsing/serialization using Serde's derive macros[5]
- Support for
no_std
environments throughserde_json_core
[1]- Zero-copy deserialization of JSON strings[1]
- Compact serialization format with minimal memory allocation[1]
The crate continues to be actively maintained as part of the Serde ecosystem, with regular updates to maintain compatibility with Rust's stable toolchain[1][8].
Citations:
- 1: https://docs.rs/serde-json-core/latest/serde_json_core/
- 2: https://github.com/nox/serde_json
- 3: https://users.rust-lang.org/t/what-is-the-correct-way-to-take-serde-json-into-use/31555
- 4: https://docs.rs/serde_json/latest/src/serde_json/lib.rs.html
- 5: https://serde.rs
- 6: https://crates.io/crates/serde_json
- 7: https://github.com/serde-rs/json/releases
- 8: https://crates.io/crates/serde
- 9: https://users.rust-lang.org/t/serde-json-issues-on-linux-json-macro/117534/4
Confirm serde_json version pinning and verify compatibility.
The dependency in integration_tests/cargo.toml is currently set to:
serde_json = "1.0.128"Note that as of February 2025, the latest version is 1.0.139. Please verify whether this specific version restriction is intentional (e.g., due to compatibility or stability requirements) or if you’d prefer to update to allow patch-level upgrades.
integration_tests/tests/steps/mod.rs (3)
16-19
: Rename for clarity, not conflict.The two step handlers are actually named
wait_seconds
and wait_seconds_and_stop, so there is no compile-time name conflict; a more descriptive name for the first handler still helps tell the pair apart.#[when(expr = "I wait {int} seconds")] -async fn wait_seconds(_world: &mut TariWorld, seconds: u64) { +async fn wait_seconds_for_test(_world: &mut TariWorld, seconds: u64) { tokio::time::sleep(Duration::from_secs(seconds)).await; }
21-24
: No rename needed here.The second handler already has a unique name, wait_seconds_and_stop, so it can stay as-is:#[then(expr = "I wait {int} seconds and stop")] async fn wait_seconds_and_stop(_world: &mut TariWorld, seconds: u64) { tokio::time::sleep(Duration::from_secs(seconds)).await; }
26-38
: Clean up commented code and improve error handling.The function has several issues:
- Contains commented-out code that should be removed.
- Uses
back()
to check the last error but doesn't clear it, which could affect subsequent tests.- Comments suggest incomplete implementation.
Consider this implementation:
#[then(regex = r"I receive an error containing '(.*)'")] async fn receive_an_error(world: &mut TariWorld, error: String) { - match world.errors.back() { + match world.errors.pop_back() { Some(err) => assert_eq!(err, &error), None => panic!("Should have received an error"), }; - - // No-op. - // Was not implemented in previous suite, gave it a quick try but missing other peices - - // assert!(world.errors.len() > 1); - // assert!(world.errors.pop_front().unwrap().contains(&error)) }Additionally, consider using
.contains()
instead of exact matching for more flexible error assertions:- Some(err) => assert_eq!(err, &error), + Some(err) => assert!(err.contains(&error), "Error '{}' does not contain '{}'", err, error),integration_tests/src/p2pool_process.rs (1)
276-286
: Add a short delay in the port-free check loop.The loop tries up to 20 times to rebind the ports without any sleep, which could lead to a tight CPU spin. Adding a small delay will avoid unnecessary CPU usage.
for attempts in 0..20 { + std::thread::sleep(std::time::Duration::from_millis(100)); if !p2p_port_closed && TcpListener::bind(("127.0.0.1", self.config.p2p_port)).is_ok() { p2p_port_closed = true; } if !grpc_port_closed && TcpListener::bind(("127.0.0.1", self.config.grpc_port)).is_ok() { grpc_port_closed = true; } if p2p_port_closed && grpc_port_closed { break; } }
66cacc5
to
3b58df2
Compare
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Actionable comments posted: 3
♻️ Duplicate comments (3)
p2pool/src/cli/commands/util.rs (2)
181-181
:⚠️ Potential issueFix incorrect squad calculation.
The expression
min(1, config.p2p_service.num_squads)
will always return 1, making the squad calculation incorrect. This effectively ignores the num_squads configuration.Apply this diff:
- (*swarm.local_peer_id().to_bytes().last().unwrap_or(&0) as usize) % min(1, config.p2p_service.num_squads); + (*swarm.local_peer_id().to_bytes().last().unwrap_or(&0) as usize) % config.p2p_service.num_squads;
233-236
: 🛠️ Refactor suggestionSimplify the
fs::exists
check.
std::fs::exists
does exist (stabilized in Rust 1.81) and returns an io::Result&lt;bool&gt;, but the error case is discarded here, so
Path::exists()
expresses the intent more directly.- if let Ok(false) = fs::exists(parent) { + if !parent.exists() { fs::create_dir_all(parent)?; }integration_tests/tests/cucumber.rs (1)
155-155
: 🛠️ Refactor suggestionImprove file creation error handling
The file creation uses
unwrap()
, which will panic on error:- let file = fs::File::create("cucumber-output-junit.xml").unwrap(); + let output_path = std::env::var("CUCUMBER_OUTPUT_PATH") + .unwrap_or_else(|_| "cucumber-output-junit.xml".to_string()); + let file = fs::File::create(&output_path) + .unwrap_or_else(|e| panic!("Failed to create output file {}: {}", output_path, e));
🧹 Nitpick comments (9)
p2pool/src/cli/commands/util.rs (1)
224-259
: LibP2pInfo struct looks good, but consider handling more error cases.The new LibP2pInfo struct is well-designed for exporting peer information for testing. The read_from_file method has good timeout handling.
Consider adding more specific error handling for potential file read issues like permission problems:
pub fn read_from_file(path: &Path, timeout: Duration) -> io::Result<Self> { let start = Instant::now(); while !path.exists() { if start.elapsed() >= timeout { return Err(io::Error::new( io::ErrorKind::TimedOut, format!("Timeout waiting for '{}' to be created", path.display()), )); } sleep(Duration::from_millis(100)); } - let file = File::open(path)?; + let file = File::open(path).map_err(|e| { + io::Error::new( + e.kind(), + format!("Failed to open '{}': {}", path.display(), e), + ) + })?; let reader = BufReader::new(file); - let libp2p_info = serde_json::from_reader(reader)?; + let libp2p_info = serde_json::from_reader(reader).map_err(|e| { + io::Error::new( + io::ErrorKind::InvalidData, + format!("Failed to parse JSON from '{}': {}", path.display(), e), + ) + })?; Ok(libp2p_info) }integration_tests/.husky/_/husky.sh (1)
7-7
: Declare and assign separately to avoid masking return valuesThe current line:
readonly hook_name="$(basename "$0")"might mask the return value. Consider separating declaration and assignment:
- readonly hook_name="$(basename "$0")" + hook_name="$(basename "$0")" + readonly hook_name🧰 Tools
🪛 Shellcheck (0.10.0)
[warning] 7-7: Declare and assign separately to avoid masking return values.
(SC2155)
integration_tests/tests/cucumber.rs (2)
35-46
: Consolidate output management functionsThe
flush_stdout
and_flush_stdout
functions have overlapping functionality. Consider consolidating them; when doing so, keep a single mutable guard for the whole operation, since re-locking the same std::sync::Mutex while the first guard is still alive would deadlock:-fn _flush_stdout(buffer: &Arc<Mutex<Vec<u8>>>) { +fn flush_stdout_to_console(buffer: &Arc<Mutex<Vec<u8>>>) -> Result<(), String> { // After each test we flush the stdout to the logs. - let mut buffer = buffer.lock().unwrap(); - io::stdout().write_all(&buffer).unwrap(); - // io::stdout().flush().unwrap(); + let mut buffer_guard = buffer.lock().map_err(|e| format!("Failed to lock buffer: {}", e))?; + io::stdout().write_all(&buffer_guard) + .map_err(|e| format!("Failed to write to stdout: {}", e))?; + let content = str::from_utf8(&buffer_guard) + .map_err(|e| format!("Invalid UTF-8: {}", e))?; info!( target: LOG_TARGET_STDOUT, - "{}", - str::from_utf8(&buffer).unwrap() + "{}", content ); - buffer.clear(); + buffer_guard.clear(); + Ok(()) }
125-127
: Remove commented-out codeRemove or implement the commented-out code in the after hook:
- // let stdout_buffer = stdout_buffer_clone.clone(); Box::pin(async move { - // flush_stdout(&stdout_buffer);integration_tests/log4rs/cucumber.yml (2)
69-74
: Remove or document commented-out configurationEither remove the commented-out root logger configuration or add a comment explaining why it's kept for reference:
-# We don't want prints during cucumber test, everything useful will in logs. -# root: -# level: warn -# appenders: -# - stdout +# Alternative restrictive root logger configuration +# Uncomment to prevent most output from going to stdout: +# root: +# level: warn +# appenders: +# - stdout
82-91
: Add documentation for logger componentsAdd brief comments explaining what each logger component is responsible for to help developers understand the logging structure:
loggers: # stdout stdout: + # Captures and logs standard output from test execution level: info # we have only single print, and it's info appenders: - stdout # cucumber cucumber: + # Logs from cucumber test framework and test scenarios level: debug appenders: - cucumber additive: false # base_node comms: + # Communication related logs between nodes and components level: debug appenders: - base_node additive: falseAlso applies to: 93-118
integration_tests/src/p2pool_process.rs (1)
229-230
: Handle potential I/O errors.
Reading libp2p info usingunwrap()
can panic if the file is missing or cannot be read. Use proper error handling to prevent unexpected crashes in test environments.- let libp2p_info = LibP2pInfo::read_from_file(&temp_dir_path.join(LIBP2P_INFO_FILE), Duration::from_secs(30)).unwrap(); + let libp2p_info = match LibP2pInfo::read_from_file(&temp_dir_path.join(LIBP2P_INFO_FILE), Duration::from_secs(30)) { + Ok(info) => info, + Err(err) => panic!("Failed to read libp2p info: {:?}", err), + };integration_tests/src/miner.rs (1)
85-99
: Revisit the 100-million iteration limit.
The loop can theoretically attempt up to 100 million nonces, which might stall the CI in rare circumstances. Consider a faster fail or a dynamic approach.integration_tests/tests/steps/p2pool_steps.rs (1)
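A wall-clock budget for the nonce search above would fail faster and more predictably than a fixed 100-million-iteration cap. The sketch below is illustrative only: `find_nonce` and its predicate are hypothetical names, not the miner's actual API.

```rust
use std::time::{Duration, Instant};

/// Search nonces until one satisfies `meets_target` or the time budget runs out.
/// Hypothetical helper sketch, not the existing miner code.
pub fn find_nonce<F: Fn(u64) -> bool>(meets_target: F, budget: Duration) -> Option<u64> {
    let deadline = Instant::now() + budget;
    let mut nonce: u64 = 0;
    while Instant::now() < deadline {
        if meets_target(nonce) {
            return Some(nonce);
        }
        nonce = nonce.wrapping_add(1);
    }
    None
}
```

This bounds the worst case by elapsed time rather than iteration count, which maps more directly onto a CI step timeout.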
19-20
: Avoid relying on fixed sleep durations.
Using a fixed 2-second wait for node initialization and synchronization may cause flaky tests or wasted time. Consider waiting for a condition (e.g., health checks) rather than a fixed delay.Also applies to: 26-27, 33-34, 40-41, 47-48
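Condition-based waiting can be sketched as a small polling helper. It is shown synchronously for brevity; the async step definitions would use `tokio::time::sleep` instead, and the helper and its name are illustrative rather than part of the existing harness.

```rust
use std::thread::sleep;
use std::time::{Duration, Instant};

/// Poll `check` every `interval` until it returns true or `timeout` elapses.
/// Returns whether the condition was met. Illustrative sketch only.
pub fn wait_until<F: FnMut() -> bool>(mut check: F, interval: Duration, timeout: Duration) -> bool {
    let start = Instant::now();
    loop {
        if check() {
            return true;
        }
        if start.elapsed() >= timeout {
            return false;
        }
        sleep(interval);
    }
}
```

A step would then wait on a concrete condition (for example, a successful gRPC health or stats query) and fail with a clear timeout, instead of sleeping a fixed two seconds and hoping the node is ready.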
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
Cargo.lock
is excluded by!**/*.lock
📒 Files selected for processing (34)
.github/workflows/audit.yml
(1 hunks)cargo.toml
(1 hunks)integration_tests/.husky/_/husky.sh
(1 hunks)integration_tests/README.md
(1 hunks)integration_tests/cargo.toml
(1 hunks)integration_tests/log4rs/cucumber.yml
(1 hunks)integration_tests/src/base_node_process.rs
(1 hunks)integration_tests/src/lib.rs
(1 hunks)integration_tests/src/miner.rs
(1 hunks)integration_tests/src/p2pool_process.rs
(1 hunks)integration_tests/src/world.rs
(1 hunks)integration_tests/tests/cucumber.rs
(1 hunks)integration_tests/tests/features/Sync.feature
(1 hunks)integration_tests/tests/steps/base_node_steps.rs
(1 hunks)integration_tests/tests/steps/mod.rs
(1 hunks)integration_tests/tests/steps/p2pool_steps.rs
(1 hunks)p2pool/Cargo.toml
(2 hunks)p2pool/src/cli/args.rs
(4 hunks)p2pool/src/cli/commands/mod.rs
(1 hunks)p2pool/src/cli/commands/util.rs
(6 hunks)p2pool/src/cli/mod.rs
(1 hunks)p2pool/src/lib.rs
(1 hunks)p2pool/src/main.rs
(1 hunks)p2pool/src/server/config.rs
(4 hunks)p2pool/src/server/grpc/p2pool.rs
(7 hunks)p2pool/src/server/http/stats_collector.rs
(1 hunks)p2pool/src/server/p2p/network.rs
(32 hunks)p2pool/src/server/p2p/peer_store.rs
(3 hunks)p2pool/src/server/server.rs
(1 hunks)p2pool/src/sharechain/in_memory.rs
(4 hunks)p2pool/src/sharechain/lmdb_block_storage.rs
(1 hunks)p2pool/src/sharechain/p2block.rs
(1 hunks)p2pool/src/sharechain/p2chain_level.rs
(1 hunks)src/cli/mod.rs
(0 hunks)
💤 Files with no reviewable changes (1)
- src/cli/mod.rs
🚧 Files skipped from review as they are similar to previous changes (22)
- p2pool/src/server/server.rs
- p2pool/src/cli/commands/mod.rs
- p2pool/src/server/grpc/p2pool.rs
- p2pool/src/sharechain/lmdb_block_storage.rs
- p2pool/src/sharechain/p2chain_level.rs
- cargo.toml
- p2pool/src/server/p2p/network.rs
- p2pool/Cargo.toml
- p2pool/src/sharechain/p2block.rs
- integration_tests/tests/features/Sync.feature
- p2pool/src/server/http/stats_collector.rs
- p2pool/src/server/p2p/peer_store.rs
- p2pool/src/cli/mod.rs
- integration_tests/src/base_node_process.rs
- p2pool/src/lib.rs
- integration_tests/cargo.toml
- integration_tests/tests/steps/base_node_steps.rs
- integration_tests/tests/steps/mod.rs
- p2pool/src/sharechain/in_memory.rs
- integration_tests/src/lib.rs
- .github/workflows/audit.yml
- p2pool/src/server/config.rs
🧰 Additional context used
🪛 Shellcheck (0.10.0)
integration_tests/.husky/_/husky.sh
[warning] 7-7: Declare and assign separately to avoid masking return values.
(SC2155)
[warning] 17-17: ShellCheck can't follow non-constant source. Use a directive to specify location.
(SC1090)
[error] 20-20: This applies export to the variable named readonly, which is probably not what you want. Use a separate command or the appropriate declare
options instead.
(SC2316)
🪛 LanguageTool
integration_tests/README.md
[style] ~30-~30: Three successive sentences begin with the same word. Consider rewording the sentence or use a thesaurus to find a synonym.
Context: ...est cucumber -- -t "@critical" ``` - To run a specific file or files add `-- --...
(ENGLISH_WORD_REPEAT_BEGINNING_RULE)
[uncategorized] ~30-~30: Possible missing comma found.
Context: ...cal" ``` - To run a specific file or files add -- --input glob
to the command li...
(AI_HYDRA_LEO_MISSING_COMMA)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: test (esmeralda)
🔇 Additional comments (12)
p2pool/src/main.rs (1)
13-13
: CLI Import Update: Looks GoodThe import change from a local crate reference to
sha_p2pool::Cli
aligns with the broader module reorganization, centralizing CLI functionality in the main crate.p2pool/src/cli/commands/util.rs (1)
185-193
: Good addition of LibP2p info export for testing.The new functionality to export LibP2p information to a file is valuable for integration testing, particularly for cucumber tests as mentioned in the PR. The error handling with warning logs is appropriate.
p2pool/src/cli/args.rs (2)
141-156
: New CLI arguments align well with integration testing needs.The added parameters for LibP2p info export, network silence delay, and minimum target difficulties provide valuable functionality for testing scenarios. The comments are clear and descriptive.
228-243
: Good refactoring of command handling logic.Extracting the command handling logic into a standalone
run_with_cli
function is a clean refactoring that improves code maintainability and reusability. This approach allows the function to be called from other contexts beyond the CLI struct's own method.integration_tests/README.md (1)
11-11
: Replace placeholder "?????????" with actual examples.All example commands currently contain placeholder text. Replace these with realistic examples to make the documentation more useful for developers.
For example:
-cargo test --release --test cucumber -- --name "?????????" +cargo test --release --test cucumber -- --name "New node should sync with peers and propagate blocks"Similarly update all other instances of "?????????" in the file.
Also applies to: 14-14, 21-21, 24-24, 27-27, 34-34, 37-37, 40-40
integration_tests/.husky/_/husky.sh (1)
20-20
: Correct the export of a read-only variableThe current line:
export readonly husky_skip_init=1incorrectly combines
export
withreadonly
and results in ShellCheck error SC2316. Instead, separate the two operations:- export readonly husky_skip_init=1 + readonly husky_skip_init=1 + export husky_skip_init🧰 Tools
🪛 Shellcheck (0.10.0)
[error] 20-20: This applies export to the variable named readonly, which is probably not what you want. Use a separate command or the appropriate
declare
options instead.(SC2316)
integration_tests/tests/cucumber.rs (2)
3-4
: Document usage of the unstable featureinternal_output_capture
The code uses an unstable Rust feature which may change or break in future Rust versions. Add a comment explaining why this feature is necessary and potential migration plans:
+// This unstable feature is required to capture output during test execution +// It's an internal implementation detail of the Rust standard library used for test infrastructure +// TODO: Monitor for stabilization or alternative approaches if this feature changes #![feature(internal_output_capture)]
106-111
: Improve logging configuration error handling and flexibilityThe logging configuration has hard-coded paths and minimal error handling:
+const DEFAULT_LOG_CONFIG_PATH: &str = "log4rs/cucumber.yml"; +const DEFAULT_LOG_CONFIG_DIR: &str = "./"; + +fn setup_logging() -> Result<(), Box<dyn std::error::Error>> { + let config_path = std::env::var("LOG_CONFIG_PATH") + .unwrap_or_else(|_| DEFAULT_LOG_CONFIG_PATH.to_string()); + let config_dir = std::env::var("LOG_CONFIG_DIR") + .unwrap_or_else(|_| DEFAULT_LOG_CONFIG_DIR.to_string()); + + initialize_logging( + &PathBuf::from(config_path), + &PathBuf::from(config_dir), + include_str!("../log4rs/cucumber.yml"), + )?; + Ok(()) +} + -initialize_logging( - &PathBuf::from("log4rs/cucumber.yml"), - &PathBuf::from("./"), - include_str!("../log4rs/cucumber.yml"), -) -.expect("logging not configured"); +setup_logging().expect("Failed to configure logging");integration_tests/src/p2pool_process.rs (1)
275-286
: Add a short delay in the port-free check loop.
This loop retries binding the ports up to 20 times without any delay, which can briefly spin the CPU.integration_tests/src/miner.rs (1)
64-65
: Use consistent wallet payment addresses for block mining and submission.
This code generates a new address insubmit_block
, which differs from the one inget_new_block
. Consider using the same address for consistency.integration_tests/src/world.rs (2)
30-31
: Fix incorrect error message.
This error variant says "client found" instead of "client not found."
50-51
: Remove debug println statement.
This extra println pollutes test output and contradicts the in-code comment telling you to remove it.
Actionable comments posted: 4
🧹 Nitpick comments (14)
p2pool/src/cli/commands/util.rs (1)
224-259: Well-designed LibP2pInfo implementation with good serialization support.

The LibP2pInfo struct and its methods provide clean serialization/deserialization with appropriate error handling and timeout support. The file path existence check correctly uses `parent.exists()` instead of a non-existent `fs::exists` function.

Two suggestions for future improvements:

- Consider using file system notifications instead of the polling approach in `read_from_file` for better efficiency
- Add an option to prevent overwriting existing files in `save_to_file` if needed

integration_tests/README.md (1)
5-5: Add a comma before "you".

The sentence "In its simplest form you can run..." should have a comma for proper grammatical separation.

```diff
-In its simplest form you can run the tests from the project route with `cargo test --release --test cucumber`
+In its simplest form, you can run the tests from the project route with `cargo test --release --test cucumber`
```

🧰 Tools
🪛 LanguageTool
[uncategorized] ~5-~5: Possible missing comma found.
Context: ...s ## Procedure to run In its simplest form you can run the tests from the project ...(AI_HYDRA_LEO_MISSING_COMMA)
integration_tests/.husky/_/husky.sh (1)
7-7: Declare and assign separately to avoid masking return values.

Using `readonly` with assignment in a single statement can mask return values of the command.

```diff
- readonly hook_name="$(basename "$0")"
+ hook_name="$(basename "$0")"
+ readonly hook_name
```

🧰 Tools
🪛 Shellcheck (0.10.0)
[warning] 7-7: Declare and assign separately to avoid masking return values.
(SC2155)
integration_tests/tests/cucumber.rs (2)
88-93: Consider making log config paths configurable via environment variables.

The logging configuration uses hard-coded paths which could limit flexibility in different environments.

```diff
+    let log_config_path = std::env::var("LOG_CONFIG_PATH")
+        .unwrap_or_else(|_| "log4rs/cucumber.yml".to_string());
+    let log_config_dir = std::env::var("LOG_CONFIG_DIR")
+        .unwrap_or_else(|_| "./".to_string());
+
     initialize_logging(
-        &PathBuf::from("log4rs/cucumber.yml"),
-        &PathBuf::from("./"),
+        &PathBuf::from(log_config_path),
+        &PathBuf::from(log_config_dir),
         include_str!("../log4rs/cucumber.yml"),
     )
     .expect("logging not configured");
```
135-136: Make output file path configurable.

The JUnit XML output file path is hard-coded. Consider making it configurable via an environment variable.

```diff
+    let output_file = std::env::var("CUCUMBER_OUTPUT_FILE")
+        .unwrap_or_else(|_| "cucumber-output-junit.xml".to_string());
-    let file = fs::File::create("cucumber-output-junit.xml")
-        .unwrap_or_else(|e| panic!("Failed to create output file 'cucumber-output-junit.xml': {}", e));
+    let file = fs::File::create(&output_file)
+        .unwrap_or_else(|e| panic!("Failed to create output file '{}': {}", output_file, e));
```

integration_tests/cargo.toml (1)
11-23: Add version pinning comment.

All Tari project dependencies use the same Git commit revision. Consider adding a comment explaining this pinning strategy and how/when to update the revision.

```diff
 # From tari base layer project
+# Note: All Tari dependencies are pinned to the same revision.
+# Update this revision when integrating with newer versions of the base layer.
 minotari_app_grpc = { git = "https://github.com/tari-project/tari.git", rev = "53c1d47971994b2ac8795768e4b8a44800ea9988" }
```

integration_tests/tests/steps/mod.rs (2)
16-24: Consider consolidating or renaming the two step definitions for waiting.

Currently, both steps – "I wait {int} seconds" and "I wait {int} seconds and stop" – merely sleep for a specified duration, with no functional difference in behavior. Consider consolidating them into a single step definition, or introduce the actual stopping logic to distinguish the "stop" variant. This will reduce potential confusion and duplication in your test steps.
26-32: Validate error context for improved debugging.

When popping the last error from `world.errors`, comparing the entire string can be sufficient for many scenarios, but partial matching or capturing more context (e.g., stack traces) may provide better diagnostics for integration tests. Consider adding additional assertion details or logging for deeper insights when tests fail.

integration_tests/src/base_node_process.rs (2)
63-67: Ensure final cleanup logic is intentional.

This `Drop` implementation automatically calls `kill()`, which enters an infinite loop (see below) until ports become available again. This can hang if the port remains blocked for reasons outside your test environment. If you intend to guarantee immediate reuse of ports, consider adding a timeout or error logging in case the ports remain blocked indefinitely.
230-243: Add a delay or maximum retries to avoid tight spin.

The kill function's loops currently attempt to rebind the ports continuously, which may cause high CPU usage in rare edge cases where ports don't free up promptly. Consider adding a short delay and/or implementing a timeout:

```diff
 loop {
+    std::thread::sleep(std::time::Duration::from_millis(100));
     if TcpListener::bind(("127.0.0.1", self.port)).is_ok() {
         break;
     }
 }
```
This approach both reduces spinning and provides a graceful exit when ports stay bound unexpectedly.
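As a standalone sketch, the bounded pattern could look like the helper below. The function name and limits are illustrative assumptions, not the project's actual API:

```rust
use std::net::TcpListener;
use std::thread::sleep;
use std::time::Duration;

/// Returns true once the port can be bound, or false after `max_attempts`.
/// Hypothetical helper showing the delay-plus-cap pattern suggested above.
fn wait_for_port_free(port: u16, max_attempts: u32) -> bool {
    for _ in 0..max_attempts {
        if TcpListener::bind(("127.0.0.1", port)).is_ok() {
            return true;
        }
        // Sleep between attempts so the loop does not spin the CPU.
        sleep(Duration::from_millis(100));
    }
    false
}

fn main() {
    // Grab an ephemeral port the OS considers free; the temporary listener
    // is dropped at the end of the statement, releasing the port.
    let port = TcpListener::bind(("127.0.0.1", 0))
        .unwrap()
        .local_addr()
        .unwrap()
        .port();
    // The freshly released port should bind quickly.
    assert!(wait_for_port_free(port, 20));
}
```

Capping attempts turns a potential hang into a clear, observable failure.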
integration_tests/src/p2pool_process.rs (2)
261-290: Confirm kill loop logic is consistent across processes.

Here, you already implement a maximum of 20 attempts and add a 100 ms delay for each attempt, which is helpful in preventing infinite loops. However, the `base_node_process.rs` kill loop has a different approach, potentially causing inconsistency. Consider aligning both modules to maintain consistent cleanup behavior for your tests.
433-493: Panic-based error handling is acceptable, but consider structured errors for clarity.

Your integration test uses `panic!` calls when the expected conditions aren't met (e.g., peer not found, response is invalid, status is not successful). While this is common in test code, you may consider returning structured errors (e.g., custom error messages with context) to aid debugging and clarity when a test fails. This can be especially helpful if you anticipate triaging frequent or complex failures.

integration_tests/tests/steps/p2pool_steps.rs (2)
18-25: Consider extracting common error handling into a helper function.

All five step functions have identical error handling patterns: format the error, log it, and panic. This could be extracted into a helper function to reduce code duplication.

Example implementation:

```rust
fn handle_step_error<T>(result: Result<T, impl std::fmt::Display>, step_name: &str) -> T {
    match result {
        Ok(value) => value,
        Err(err) => {
            let msg = format!("{}: {}", step_name, err);
            error!(target: LOG_TARGET, "{}", msg);
            panic!("{}", msg);
        },
    }
}
```

Then you could simplify each step function:

```diff
 async fn start_p2pool_seed_node(world: &mut TariWorld, p2pool_name: String, squad: String, base_node_name: String) {
-    if let Err(err) = spawn_p2pool_node_and_wait_for_start(world, true, p2pool_name, squad, base_node_name).await {
-        let msg = format!("start_p2pool_seed_node: {}", err);
-        error!(target: LOG_TARGET, "{}", msg);
-        panic!("{}", msg);
-    }
+    handle_step_error(
+        spawn_p2pool_node_and_wait_for_start(world, true, p2pool_name, squad, base_node_name).await,
+        "start_p2pool_seed_node"
+    );
     tokio::time::sleep(Duration::from_secs(2)).await;
 }
```
24-25: Consider making sleep duration configurable.

Fixed sleep durations (2 seconds in this case) can be problematic in test environments with varying performance characteristics. Consider making this configurable or adaptive based on the environment.

```diff
+const STEP_SLEEP_DURATION: u64 = 2; // seconds

 // Then in each step function:
-    tokio::time::sleep(Duration::from_secs(2)).await;
+    tokio::time::sleep(Duration::from_secs(STEP_SLEEP_DURATION)).await;
```

This makes it easier to adjust the timing globally if needed.
Also applies to: 35-36, 46-47, 57-58, 68-69
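Taking the configurability one step further, the duration could be read from the environment with a sensible default. This is a minimal sketch; the variable name `STEP_SLEEP_SECS` is hypothetical:

```rust
use std::env;
use std::time::Duration;

/// Hypothetical helper: read the step sleep duration from an environment
/// variable, falling back to a default when unset or unparsable.
fn step_sleep_duration() -> Duration {
    let secs = env::var("STEP_SLEEP_SECS")
        .ok()
        .and_then(|v| v.parse::<u64>().ok())
        .unwrap_or(2);
    Duration::from_secs(secs)
}

fn main() {
    // The default applies when the variable is unset.
    env::remove_var("STEP_SLEEP_SECS");
    assert_eq!(step_sleep_duration(), Duration::from_secs(2));

    // Overriding via the environment changes the returned duration.
    env::set_var("STEP_SLEEP_SECS", "1");
    assert_eq!(step_sleep_duration(), Duration::from_secs(1));
}
```

This lets CI shorten or lengthen the waits without recompiling the tests.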
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (15)
- `.github/workflows/audit.yml` (1 hunks)
- `integration_tests/.husky/_/husky.sh` (1 hunks)
- `integration_tests/README.md` (1 hunks)
- `integration_tests/cargo.toml` (1 hunks)
- `integration_tests/src/base_node_process.rs` (1 hunks)
- `integration_tests/src/lib.rs` (1 hunks)
- `integration_tests/src/miner.rs` (1 hunks)
- `integration_tests/src/p2pool_process.rs` (1 hunks)
- `integration_tests/src/world.rs` (1 hunks)
- `integration_tests/tests/cucumber.rs` (1 hunks)
- `integration_tests/tests/steps/base_node_steps.rs` (1 hunks)
- `integration_tests/tests/steps/mod.rs` (1 hunks)
- `integration_tests/tests/steps/p2pool_steps.rs` (1 hunks)
- `p2pool/src/cli/commands/util.rs` (6 hunks)
- `p2pool/src/server/p2p/network.rs` (31 hunks)
🚧 Files skipped from review as they are similar to previous changes (5)
- .github/workflows/audit.yml
- integration_tests/tests/steps/base_node_steps.rs
- p2pool/src/server/p2p/network.rs
- integration_tests/src/lib.rs
- integration_tests/src/world.rs
🧰 Additional context used
🧠 Learnings (2)
integration_tests/tests/cucumber.rs (1)
Learnt from: hansieodendaal
PR: tari-project/sha-p2pool#254
File: integration_tests/tests/cucumber.rs:25-33
Timestamp: 2025-02-26T07:13:31.464Z
Learning: Using `unwrap()` in test code is acceptable in this project, as test code doesn't need the same level of error handling as production code.
integration_tests/src/p2pool_process.rs (1)
Learnt from: hansieodendaal
PR: tari-project/sha-p2pool#254
File: integration_tests/tests/cucumber.rs:25-33
Timestamp: 2025-02-26T07:13:31.464Z
Learning: Using `unwrap()` in test code is acceptable in this project, as test code doesn't need the same level of error handling as production code.
🪛 Shellcheck (0.10.0)
integration_tests/.husky/_/husky.sh
[warning] 7-7: Declare and assign separately to avoid masking return values.
(SC2155)
[warning] 17-17: ShellCheck can't follow non-constant source. Use a directive to specify location.
(SC1090)
🪛 LanguageTool
integration_tests/README.md
[uncategorized] ~5-~5: Possible missing comma found.
Context: ...s ## Procedure to run In its simplest form you can run the tests from the project ...
(AI_HYDRA_LEO_MISSING_COMMA)
[style] ~30-~30: Three successive sentences begin with the same word. Consider rewording the sentence or use a thesaurus to find a synonym.
Context: ...est cucumber -- -t "@critical" ``` - To run a specific file or files add `-- --...
(ENGLISH_WORD_REPEAT_BEGINNING_RULE)
⏰ Context from checks skipped due to timeout of 90000ms (2)
- GitHub Check: test (esmeralda)
- GitHub Check: clippy
🔇 Additional comments (11)
p2pool/src/cli/commands/util.rs (4)
55-59: Nice enhancement to support detailed logging for integration tests.

This conditional log configuration selection is a good approach to enable different logging levels based on environment context.
89-100: Good addition of configurable parameters.

The additional parameters for network silence delay and minimum target difficulties provide better flexibility for testing and deployment scenarios.
181-181: Fixed squad calculation issue correctly.

The change from a potential `min` function to `max(1, config.p2p_service.num_squads)` properly addresses the issue where squad calculation would always use 1 as the divisor, effectively ignoring the configured number of squads.
185-193: Proper error handling for libp2p info export.

Good implementation of the export functionality with appropriate error handling and logging.
integration_tests/tests/cucumber.rs (2)
3-3: Document the usage of the `internal_output_capture` unstable feature.

Add a comment explaining why this unstable feature is necessary and any future migration plans.

```diff
+// This unstable feature is used to capture output during test execution.
+// It's an internal Rust test harness feature that allows redirecting stdout.
+// TODO: Monitor for stabilization or alternative approaches in future Rust versions.
 #![feature(internal_output_capture)]
```
42-43: ✅ Verification successful

Verify exact path of p2pool executable.

The function `get_p2pool_exe_path()` is used to locate the executable, but we should verify its implementation to ensure it correctly finds the path across different environments.

🏁 Script executed:

```shell
#!/bin/bash
# Check the p2pool_process.rs file to see how get_p2pool_exe_path is implemented
rg -A 10 "fn get_p2pool_exe_path" integration_tests/src/p2pool_process.rs
```

Length of output: 438

P2pool Executable Path Correctly Configured

The implementation of `get_p2pool_exe_path()` in `integration_tests/src/p2pool_process.rs` correctly handles both Windows and non-Windows environments by joining the crate's manifest directory with the appropriate path (`../target/release/sha_p2pool.exe` for Windows and `../target/release/sha_p2pool` otherwise). This confirms that the executable path is constructed as expected across different environments. No further changes are necessary.

integration_tests/cargo.toml (1)
64-64: Excellent configuration for the Cucumber test harness.

Setting `harness = false` is correct for allowing Cucumber to manage test output instead of the default Rust test framework.

integration_tests/src/miner.rs (2)
86-106: Good improvements to error handling in find_sha3x_header_with_achieved_difficulty.

The function now properly returns errors instead of panicking, which is a great improvement over earlier versions. The `TestResult<()>` return type makes the function more composable with other error-handling code.
27-72: Well-implemented error handling in mine_and_submit_tari_blocks.

This function demonstrates good error handling practices throughout. You've correctly implemented the suggestions from past reviews with proper error propagation using `map_err` and the `?` operator rather than unwrap calls. You're also correctly using the same wallet address for both getting and submitting blocks, which is an improvement.

integration_tests/tests/steps/p2pool_steps.rs (2)

40-47: Good usage of the mine_and_submit_tari_blocks function.

This step function integrates well with the miner functionality, properly handling any errors that might occur during block mining and submission.

49-58: Well-structured block height verification step.

This step function properly implements the verification of block height, making good use of the verify_block_height function from the miner module.

593a3e5 to 9ce8018 Compare
Actionable comments posted: 0
🧹 Nitpick comments (7)
integration_tests/.husky/_/husky.sh (1)
7-7: Separate declaration and assignment for readonly variables.

To avoid potentially masking return values and align with shell best practices, separate the declaration and assignment of the readonly variable.

```diff
- readonly hook_name="$(basename "$0")"
+ hook_name="$(basename "$0")"
+ readonly hook_name
```

🧰 Tools
🪛 Shellcheck (0.10.0)
[warning] 7-7: Declare and assign separately to avoid masking return values.
(SC2155)
integration_tests/src/miner.rs (3)
27-72: Consider specifying request timeouts for p2pool client calls.

While this code properly handles errors from the p2pool gRPC calls, it may hang indefinitely if the server becomes unresponsive. To improve resilience and avoid blocking test runs, consider specifying a request timeout in the gRPC client or implementing a timeout strategy in this loop.
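One way to implement such a timeout strategy is to wrap the fallible call in a deadline-bounded retry loop. This is a generic synchronous sketch under assumed names — not the project's actual gRPC client API:

```rust
use std::time::{Duration, Instant};

/// Hypothetical sketch: retry a fallible call until it succeeds or an overall
/// deadline elapses, so an unresponsive server cannot stall the test forever.
fn call_with_deadline<T, E, F: FnMut() -> Result<T, E>>(
    mut call: F,
    deadline: Duration,
    retry_every: Duration,
) -> Option<T> {
    let start = Instant::now();
    loop {
        if let Ok(value) = call() {
            return Some(value);
        }
        if start.elapsed() >= deadline {
            // Give up once the overall budget is spent.
            return None;
        }
        std::thread::sleep(retry_every);
    }
}

fn main() {
    // Simulate a call that succeeds on the second attempt.
    let mut attempts = 0;
    let result = call_with_deadline(
        || {
            attempts += 1;
            if attempts >= 2 { Ok(attempts) } else { Err("not ready") }
        },
        Duration::from_secs(1),
        Duration::from_millis(10),
    );
    assert_eq!(result, Some(2));
}
```

In async test code the same shape is available via `tokio::time::timeout` around the client call.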
86-106: Reconsider 100 million nonce tries.

A limit of 100,000,000 iterations in `find_sha3x_header_with_achieved_difficulty` may be excessive for typical tests, potentially causing very long loops if conditions are unfavorable. You could reduce it to a smaller limit or introduce a random start nonce to expedite finding a valid difficulty.

```diff
- if num_tries > 100_000_000 {
+ if num_tries > 1_000_000 {
```
108-166: Use flexible wait strategies in height verification.

The polling approach using fixed half-second sleeps is reliable but can slow test runs when concurrency or parallel tests grow. An optional backoff or shorter interval could improve performance without sacrificing correctness.

integration_tests/tests/steps/p2pool_steps.rs (1)
integration_tests/tests/steps/p2pool_steps.rs (1)
16-36
: Refactor repeated error handling and sleep pattern.Both
start_p2pool_seed_node
andstart_p2pool_node
follow the same structure of handling errors by logging + panicking, then sleeping 2 seconds. You could unify this logic into a helper function or remove the arbitrary sleep ifspawn_p2pool_node_and_wait_for_start
already ensures readiness.integration_tests/src/world.rs (2)
26-34: Consolidate or remove the unused `P2PoolNotFound` variant.

The `P2PoolNotFound(String)` variant appears unused. Having both `P2PoolProcessNotFound` and `P2PoolNotFound` can confuse maintainers. Merge them if they refer to the same condition, or remove the unused variant to keep your error definitions clear.

```diff
 #[derive(Error, Debug)]
 pub enum TariWorldError {
     #[error("P2Pool process not found: {0}")]
     P2PoolProcessNotFound(String),
-    #[error("P2Pool client not found: {0}")]
-    P2PoolNotFound(String),
     #[error("Base node process not found: {0}")]
     BaseNodeProcessNotFound(String),
 }
```
108-120: Optionally handle errors on node shutdown.

Calling `kill()` is suitable for test teardown, but further error reporting or retries could be beneficial if node shutdown fails. For now, it's acceptable for integration tests.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (18)
- `.github/workflows/audit.yml` (1 hunks)
- `Cargo.toml` (1 hunks)
- `integration_tests/.husky/_/husky.sh` (1 hunks)
- `integration_tests/Cargo.toml` (1 hunks)
- `integration_tests/README.md` (1 hunks)
- `integration_tests/src/base_node_process.rs` (1 hunks)
- `integration_tests/src/lib.rs` (1 hunks)
- `integration_tests/src/miner.rs` (1 hunks)
- `integration_tests/src/p2pool_process.rs` (1 hunks)
- `integration_tests/src/world.rs` (1 hunks)
- `integration_tests/tests/cucumber.rs` (1 hunks)
- `integration_tests/tests/steps/base_node_steps.rs` (1 hunks)
- `integration_tests/tests/steps/mod.rs` (1 hunks)
- `integration_tests/tests/steps/p2pool_steps.rs` (1 hunks)
- `null` (1 hunks)
- `p2pool/src/cli/commands/mod.rs` (1 hunks)
- `p2pool/src/cli/commands/util.rs` (6 hunks)
- `p2pool/src/server/p2p/network.rs` (31 hunks)
✅ Files skipped from review due to trivial changes (1)
- null
🚧 Files skipped from review as they are similar to previous changes (6)
- .github/workflows/audit.yml
- p2pool/src/server/p2p/network.rs
- integration_tests/src/base_node_process.rs
- integration_tests/tests/cucumber.rs
- integration_tests/tests/steps/base_node_steps.rs
- integration_tests/tests/steps/mod.rs
🧰 Additional context used
🧠 Learnings (1)
integration_tests/src/p2pool_process.rs (1)
Learnt from: hansieodendaal
PR: tari-project/sha-p2pool#254
File: integration_tests/tests/cucumber.rs:25-33
Timestamp: 2025-02-26T07:13:31.464Z
Learning: Using `unwrap()` in test code is acceptable in this project, as test code doesn't need the same level of error handling as production code.
🪛 Shellcheck (0.10.0)
integration_tests/.husky/_/husky.sh
[warning] 7-7: Declare and assign separately to avoid masking return values.
(SC2155)
[warning] 17-17: ShellCheck can't follow non-constant source. Use a directive to specify location.
(SC1090)
🪛 LanguageTool
integration_tests/README.md
[style] ~30-~30: Three successive sentences begin with the same word. Consider rewording the sentence or use a thesaurus to find a synonym.
Context: ...est cucumber -- -t "@critical" ``` - To run a specific file or files add `-- --...
(ENGLISH_WORD_REPEAT_BEGINNING_RULE)
⏰ Context from checks skipped due to timeout of 90000ms (4)
- GitHub Check: test (esmeralda)
- GitHub Check: clippy
- GitHub Check: Cucumber tests / FFI
- GitHub Check: cargo check with stable
🔇 Additional comments (33)
integration_tests/README.md (1)
1-52: Well-structured documentation for cucumber tests!

The documentation is comprehensive and provides clear instructions for running tests with different options. The examples use actual test names rather than placeholders, making it more helpful for developers.
I appreciate the inclusion of the CI section that clarifies which tests will run in different contexts (PR, nightly, weekly).
🧰 Tools
🪛 LanguageTool
[style] ~30-~30: Three successive sentences begin with the same word. Consider rewording the sentence or use a thesaurus to find a synonym.
Context: ...est cucumber -- -t "@critical" ``` - To run a specific file or files add `-- --...(ENGLISH_WORD_REPEAT_BEGINNING_RULE)
p2pool/src/cli/commands/mod.rs (1)
4-14: Appropriate module organization with controlled exports.

The changes make the public API more explicit by selectively exporting specific entities instead of entire modules. This is good practice for API design and encapsulation.
integration_tests/.husky/_/husky.sh (1)
20-21: Correctly handling readonly and export operations.

Good implementation of the variable export – separating the readonly declaration from the export operation avoids ShellCheck error SC2316.
integration_tests/Cargo.toml (1)
1-65: Well-structured package configuration with properly pinned dependencies.

The Cargo.toml file is well-organized with:
- Clear package metadata
- Consistently pinned dependencies (using the same Git revision for all Tari dependencies)
- Appropriate test configuration for cucumber
This approach ensures reproducible builds and proper integration with the testing framework.
Cargo.toml (3)
1-6: Clean workspace configuration transition!

The transition from a package to a workspace configuration is well-structured and concise, allowing the project to manage multiple related packages (p2pool and integration_tests).
8-9: Good choice of resolver version.

Using resolver = "2" is appropriate for handling dependency resolution in a workspace with multiple packages, ensuring consistent behavior across the workspace.
11-14: Important safety improvement for release builds.

Enabling overflow checks in release mode is an excellent security practice that prevents silent integer overflows, which could lead to incorrect calculations or security vulnerabilities.
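For reference, this is the kind of profile setting being praised — a minimal `Cargo.toml` fragment (the exact surrounding keys in the project's workspace file may differ):

```toml
# Keep integer overflow checks on even in optimized builds, so an overflow
# panics instead of silently wrapping.
[profile.release]
overflow-checks = true
```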
p2pool/src/cli/commands/util.rs (7)
55-59: Good approach to conditional logging configuration.

The conditional inclusion of detailed logging based on an environment variable allows for flexible debugging without code changes.
89-100: Well-structured configuration extensions for testing.

The new parameters for network silence delay and target difficulties provide important configuration options for integration testing scenarios.
181-181: Fixed squad calculation bug.

The previous implementation used `min(1, config.p2p_service.num_squads)`, which would always return 1, effectively ignoring the configured number of squads. Using `max(1, config.p2p_service.num_squads)` correctly ensures the squad ID is at least 1 while respecting the configuration.
185-193: Good error handling for LibP2pInfo persistence.

The implementation properly warns about failures when saving the libp2p info without interrupting the execution flow, which is appropriate for this context.
224-229: Clean LibP2pInfo struct definition.

The struct is well-designed with appropriate public fields for node identification and squad information, making it suitable for integration testing.
231-241: Robust file saving implementation.

The save_to_file method correctly handles directory creation if needed and properly serializes the data to a file.
243-258: Effective timeout mechanism for file reading.

The read_from_file method includes a good timeout implementation that waits for the file to exist before attempting to read it, preventing premature failures during testing.
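The wait-then-read pattern described here can be sketched in isolation. Names and intervals below are illustrative, not the actual `read_from_file` implementation:

```rust
use std::fs;
use std::path::Path;
use std::thread::sleep;
use std::time::{Duration, Instant};

/// Hypothetical sketch: wait for the file to appear, polling at a fixed
/// interval up to `timeout`, then read its contents.
fn read_when_available(path: &Path, timeout: Duration) -> Option<String> {
    let start = Instant::now();
    while !path.exists() {
        if start.elapsed() >= timeout {
            return None;
        }
        sleep(Duration::from_millis(100));
    }
    fs::read_to_string(path).ok()
}

fn main() {
    let path = std::env::temp_dir().join("libp2p_info_example.json");
    fs::write(&path, "{\"peer_id\":\"example\"}").unwrap();

    let contents = read_when_available(&path, Duration::from_secs(1)).unwrap();
    assert!(contents.contains("peer_id"));

    fs::remove_file(&path).ok();
}
```

A deadline like this is what keeps a test from blocking forever when the producing process never writes the file.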
integration_tests/src/lib.rs (4)
27-42: Robust port selection with timeout.

The get_port function effectively finds an available port within the specified range and includes a timeout mechanism to prevent infinite loops.
51-70: Effective service availability check.

The wait_for_service function properly verifies if a service is running by attempting TCP connections, with appropriate retry logic.
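A simplified sketch of such a TCP-connect availability check (illustrative only; the real `wait_for_service` may differ in signature and timing):

```rust
use std::net::{TcpListener, TcpStream};
use std::thread::sleep;
use std::time::Duration;

/// Hypothetical helper: try to connect to the address a few times, sleeping
/// between attempts, and report whether any connection succeeded.
fn service_is_up(addr: &str, attempts: u32, delay: Duration) -> bool {
    for _ in 0..attempts {
        if TcpStream::connect(addr).is_ok() {
            return true;
        }
        sleep(delay);
    }
    false
}

fn main() {
    // Start a throwaway listener so the check has something to connect to.
    let listener = TcpListener::bind("127.0.0.1:0").unwrap();
    let addr = listener.local_addr().unwrap().to_string();
    assert!(service_is_up(&addr, 3, Duration::from_millis(50)));
}
```

Note the distinction from the port-free checks elsewhere in this PR: `connect` verifies something is *listening*, whereas `bind` verifies nothing is.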
72-82: Improved error handling in peer address retrieval.

The implementation correctly uses filter_map to handle cases where specified peers don't exist, avoiding potential panics from unwraps.
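The `filter_map` lookup pattern can be illustrated with a plain map of peer names to addresses (hypothetical data and types, not the project's world struct):

```rust
use std::collections::HashMap;

/// Hypothetical sketch of the pattern: look up each named peer and silently
/// skip names that do not exist, instead of unwrapping and panicking.
fn peer_addresses(peers: &HashMap<String, String>, names: &[&str]) -> Vec<String> {
    names
        .iter()
        .filter_map(|name| peers.get(*name).cloned())
        .collect()
}

fn main() {
    let mut peers = HashMap::new();
    peers.insert("node_a".to_string(), "/ip4/127.0.0.1/tcp/1234".to_string());

    // "missing" is simply dropped rather than causing a panic.
    let addrs = peer_addresses(&peers, &["node_a", "missing"]);
    assert_eq!(addrs, vec!["/ip4/127.0.0.1/tcp/1234".to_string()]);
}
```

Whether skipping unknown names or erroring on them is correct depends on the test's intent; `filter_map` encodes the "skip" choice explicitly.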
84-94: Consistent implementation of base node peer addresses retrieval.

This function follows the same pattern as the p2pool version, ensuring consistent error handling across similar functions.
integration_tests/src/p2pool_process.rs (5)
33-45: Well-designed P2PoolProcess struct.

The structure contains all necessary fields for managing and interacting with a P2Pool node during testing.
46-50: Automatic resource cleanup with Drop implementation.

The Drop trait implementation ensures that P2Pool processes are properly terminated when they go out of scope, preventing orphaned processes during tests.
114-142: Thorough directory existence and creation handling.

The code correctly checks for directory existence and creates necessary directories with appropriate error handling.
264-294: Robust process termination with port release verification.

The kill method not only terminates the process but also verifies that the ports are actually released, which helps prevent port conflicts in subsequent tests.
436-499: Improved peer connection verification.

The function now returns a Result type instead of using unwrap calls, providing better error context when verification fails. This addresses previous feedback about error handling in test code.
integration_tests/src/miner.rs (1)
74-84: Looks good for generating random test addresses.

This approach is straightforward and correctly creates a new random dual address. No immediate changes are needed.
integration_tests/tests/steps/p2pool_steps.rs (3)
38-47: Same recommendation about shared patterns.

This function duplicates the pattern of error logging, panicking, and sleeping for two seconds. Please refer to the refactor suggestion in lines 16-36.
49-58: Same recommendation about shared patterns.

This function also logs errors, panics, then sleeps for a fixed duration. For consistency and faster tests, refer to the refactor suggestion in lines 16-36.
60-69: Same recommendation about shared patterns.

This function likewise logs errors, panics, and sleeps for two seconds. See lines 16-36 for the refactor suggestion.
integration_tests/src/world.rs (6)
36-46: World struct checks out.

The `TariWorld` struct and its fields look appropriate for the integration test environment.
48-55: Custom Debug implementation is fine.

Restricting the debug output to `p2pool_nodes` and `blocks` is clear and helps avoid overly verbose logs.
57-67: p2pool client retrieval is well-handled.

The use of `inspect_err` and explicit logging for error conditions is clean.
69-80: Base node client retrieval is acceptable.

Consistent error handling and logging approach mirror the p2pool client.
82-98: Process getters appear correct.

These straightforward getters raise errors when the node is not found; returning results is a good practice for integration tests.
100-106: Scenario setup is handled properly.

Setting up the `current_base_dir` by replacing spaces is helpful.
Actionable comments posted: 0
🧹 Nitpick comments (3)
p2pool/src/cli/commands/util.rs (1)
243-258: Consider adding exponential backoff to the file polling.

The current implementation polls every 100 ms regardless of how long the operation has been running. For longer timeouts, consider implementing an exponential backoff strategy to reduce unnecessary polling.

```diff
- sleep(Duration::from_millis(100));
+ let sleep_duration = Duration::from_millis(100).max((start.elapsed() / 2).min(Duration::from_secs(1)));
+ sleep(sleep_duration);
```

log4rs_detailed.yml (2)
1-1: Refresh Rate Format

The refresh rate is set as `30 seconds`. For clarity and to avoid any YAML parsing ambiguities, consider quoting the value (e.g., `"30 seconds"`).
115-177: Detailed Logger Configurations

The logger definitions for various components (e.g., `tari::p2pool`, `tari::p2pool::server`, etc.) are comprehensive and well-organized. Each logger is set with an appropriate level and associated with a specific appender, and all have `additive` set to `false` to prevent duplicate logging.

A few minor nitpicks:

- Indentation Consistency: Some logger blocks (e.g., `tari::p2pool::server::p2p` and `tari::p2pool::message_logging`) show slight variations in indentation compared to others. While YAML is flexible with whitespace, aligning the indentation consistently (e.g., using 2 spaces for keys and 4 spaces for properties) can improve readability.
- Trailing Line: There appears to be an orphaned line number at the end (line 177). Ensure that the file ends cleanly without extraneous characters.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- `.github/workflows/ci.yml` (1 hunks)
- `log4rs_detailed.yml` (1 hunks)
- `p2pool/src/cli/commands/util.rs` (6 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (4)
- GitHub Check: test (esmeralda)
- GitHub Check: machete
- GitHub Check: clippy
- GitHub Check: cargo check with stable
🔇 Additional comments (15)
.github/workflows/ci.yml (1)
111-111: Lock cargo-machete version in CI step.

Specifying the version using `--version 0.7.0` is a solid move to ensure reproducible builds and prevent future incompatibility issues caused by upstream changes in cargo-machete.

p2pool/src/cli/commands/util.rs (6)
180-182: Fixed the squad calculation 👍

This change correctly resolves the previous issue with the squad calculation by using `max(1, config.p2p_service.num_squads)` instead of `min(1, config.p2p_service.num_squads)`. The previous approach would always return 1, making the squad calculation ineffective.
185-193: LibP2pInfo export implementation looks good

The new code for exporting the LibP2pInfo to a file handles errors appropriately by logging but not blocking the server startup if the file can't be saved. This is useful for integration tests to track peer information.
232-236: Proper path existence check implemented

This correctly uses `parent.exists()` rather than a non-existent `fs::exists` function, resolving the previous issue.
89-100: Good addition of network configuration parameters

These additional parameters for network silence delay and minimum target difficulties provide more control over the P2P network behavior, which will be useful for testing different network scenarios.
55-59: Flexible logging configuration based on environment variable

Using the environment variable `PERFORM_DETAIL_LOGGING` to switch between logging configurations is a good approach that allows for more verbose logging during tests without modifying code.
224-228: Well-designed LibP2pInfo structure

The struct is appropriately marked as public and includes serialization/deserialization support, making it useful for integration tests that need to track peer information across processes.
log4rs_detailed.yml (8)
4-11: Stdout Appender Configuration

The `stdout` appender is clearly defined with a console kind, a well-formatted encoder pattern, and a threshold filter set to the `info` level. Everything appears correct here.
12-27: Rolling File Appender "other" Setup

The "other" appender is configured properly, with a rolling file path, an appropriate compound policy (size trigger and fixed window roller), and a clear encoder pattern.
28-43: Sync Appender Configuration

The configuration for the `sync` appender mirrors the structure of the "other" appender, with matching policy details and encoder format. This section is well implemented.
44-59: P2P Appender Configuration

The `p2p` appender correctly sets up a rolling file with a size-based trigger and fixed window roller. The encoder pattern is consistent with other appenders.
60-75
: Peer Info Appender Setup
Thepeer_info
appender is properly defined with rolling file settings and an encoder. The configuration aligns well with the established pattern.
76-91
: Sharechain Appender Configuration
Thesharechain
appender is configured similarly to the other rolling file appenders, ensuring consistent log rotation and formatting.
92-107
: GRPC Appender Setup
Thegrpc
appender follows the same design pattern as the previous rolling file appenders. The fixed window roller and encoder configuration are appropriately defined.
108-113
: Root Logger Definition
The root logger is set to theinfo
level and directs logs to both thestdout
andother
appenders. This minimal configuration should cover general logging needs effectively.
Actionable comments posted: 1
🧹 Nitpick comments (10)
integration_tests/tests/steps/p2pool_steps.rs (5)
15-24
: Consider using a more robust synchronization mechanism instead of fixed sleep
Fixed sleep durations can lead to flaky tests if operations take longer than expected. Consider using a polling mechanism or other synchronization primitives instead of the hard-coded 1-second sleep.

```diff
- tokio::time::sleep(Duration::from_secs(1)).await;
+ // Consider a polling approach that waits for specific conditions
+ // or a configurable sleep duration based on environment or test requirements
```
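The polling approach suggested above can be sketched as a small synchronous helper (the name `wait_until` is illustrative; the test crate itself would likely use an async tokio variant):

```rust
use std::time::{Duration, Instant};

// Re-check a condition at a short interval instead of sleeping a fixed time,
// and give up once a deadline passes. Returns true if the condition held
// before the timeout.
fn wait_until<F: FnMut() -> bool>(mut cond: F, timeout: Duration, interval: Duration) -> bool {
    let deadline = Instant::now() + timeout;
    loop {
        if cond() {
            return true;
        }
        if Instant::now() >= deadline {
            return false;
        }
        std::thread::sleep(interval);
    }
}

fn main() {
    let mut calls = 0;
    // Condition becomes true on the third poll; succeeds well before the deadline
    let ok = wait_until(
        || { calls += 1; calls >= 3 },
        Duration::from_secs(1),
        Duration::from_millis(10),
    );
    println!("condition met: {ok}");
}
```

A step that waits on "node is at height N" can then return as soon as the node catches up, rather than always paying the full fixed sleep.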
26-35
: Code duplication with start_p2pool_seed_node
This function is nearly identical to `start_p2pool_seed_node` except for the `is_seed` parameter. Consider refactoring to reduce duplication. You could create a private helper function that both public functions call:

```rust
async fn start_p2pool_node_internal(
    world: &mut TariWorld,
    is_seed: bool,
    p2pool_name: String,
    squad: String,
    base_node_name: String,
) {
    if let Err(err) = spawn_p2pool_node_and_wait_for_start(world, is_seed, p2pool_name, squad, base_node_name).await {
        let msg = format!("start_p2pool_node: {}", err);
        error!(target: LOG_TARGET, "{}", msg);
        panic!("{}", msg);
    }
    tokio::time::sleep(Duration::from_secs(1)).await;
}
```
48-50
: Consider using #[then] for verification steps
Verification steps typically use `#[then]` in Cucumber for better readability and separation of concerns, while `#[given]` and `#[when]` are used for setting up preconditions and actions. Though the current approach works, using `#[then]` would make the intent clearer.

```diff
 #[given(expr = "p2pool node {} stats is at height {int}")]
 #[when(expr = "p2pool node {} stats is at height {int}")]
+#[then(expr = "p2pool node {} stats is at height {int}")]
```
59-61
: Consider using #[then] for verification steps
Similar to the previous verification step, this would benefit from a `#[then]` annotation to clearly indicate it's a verification.

```diff
 #[given(expr = "p2pool node {} stats shows connected to peer {}")]
 #[when(expr = "p2pool node {} stats shows connected to peer {}")]
+#[then(expr = "p2pool node {} stats shows connected to peer {}")]
```
37-46
: Consider parameterizing wait time between operations
All steps use a hard-coded 1-second sleep. Consider making this configurable to accommodate different test environments and conditions.
You could define a constant at the module level:

```diff
 pub const LOG_TARGET: &str = "cucumber::p2pool_steps";
+pub const DEFAULT_WAIT_DURATION: Duration = Duration::from_secs(1);
```

Then use this constant throughout your code:

```diff
- tokio::time::sleep(Duration::from_secs(1)).await;
+ tokio::time::sleep(DEFAULT_WAIT_DURATION).await;
```

integration_tests/src/world.rs (5)
102-103
: Improve path sanitization
The current path sanitization only replaces spaces with underscores, which might not handle all characters that could cause issues in file paths. Consider using a more robust path sanitization approach.

```diff
- base_dir = PathBuf::from(base_dir.to_string_lossy().replace(" ", "_"));
+ // Use a more comprehensive regex or dedicated path sanitization function
+ let sanitized_path = base_dir.to_string_lossy().replace(|c: char| !c.is_alphanumeric() && c != '/' && c != '\\', "_");
+ base_dir = PathBuf::from(sanitized_path);
```
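The allow-list idea above can be isolated into a small function (name and the exact allowed character set are illustrative; widen or narrow the list to match the platforms the tests run on):

```rust
// Replace every character that is not alphanumeric or an explicitly allowed
// path character with '_'. Scenario names like "My Feature: case #1" then
// become safe directory components.
fn sanitize_path_component(raw: &str) -> String {
    raw.chars()
        .map(|c| {
            if c.is_alphanumeric() || matches!(c, '/' | '\\' | '.' | '-') {
                c
            } else {
                '_'
            }
        })
        .collect()
}

fn main() {
    println!("{}", sanitize_path_component("My Feature: case #1"));
}
```

Putting the rule in one function also gives a single place to test the sanitizer, rather than re-deriving the replacement closure at each call site.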
107-110
: Use logging framework instead of println
The `after` method uses `println!` for logging shutdown messages while the rest of the code uses the logging framework. Consider using the logging framework consistently for better log management.

```diff
 for (name, mut p) in self.p2pool_nodes.drain(..) {
-    println!("Shutting down p2pool node {}", name);
+    info!(target: LOG_TARGET, "Shutting down p2pool node {}", name);
     p.kill();
 }
```
111-115
: Use logging framework instead of println
Similar to the previous comment, use the logging framework for base node shutdown messages as well.

```diff
 for (name, mut p) in self.base_nodes.drain(..) {
-    println!("Shutting down base node {}", name);
+    info!(target: LOG_TARGET, "Shutting down base node {}", name);
     // You have explicitly trigger the shutdown now because of the change to use Arc/Mutex in tari_shutdown
     p.kill_signal.trigger();
 }
```
for (name, mut p) in self.base_nodes.drain(..) { - println!("Shutting down base node {}", name); + info!(target: LOG_TARGET, "Shutting down base node {}", name); // You have explicitly trigger the shutdown now because of the change to use Arc/Mutex in tari_shutdown p.kill_signal.trigger(); }
106-117
: Consider adding error handling for shutdown operations
The `after` method doesn't handle potential errors during cleanup, which could lead to incomplete teardown and resource leaks. Consider adding error handling to ensure proper cleanup.

```diff
 pub async fn after(&mut self, _scenario: &Scenario) {
     for (name, mut p) in self.p2pool_nodes.drain(..) {
-        println!("Shutting down p2pool node {}", name);
-        p.kill();
+        info!(target: LOG_TARGET, "Shutting down p2pool node {}", name);
+        if let Err(e) = p.kill() {
+            error!(target: LOG_TARGET, "Failed to shut down p2pool node {}: {}", name, e);
+        }
     }
     for (name, mut p) in self.base_nodes.drain(..) {
-        println!("Shutting down base node {}", name);
+        info!(target: LOG_TARGET, "Shutting down base node {}", name);
         // You have explicitly trigger the shutdown now because of the change to use Arc/Mutex in tari_shutdown
-        p.kill_signal.trigger();
+        if let Err(e) = p.kill_signal.trigger() {
+            error!(target: LOG_TARGET, "Failed to shut down base node {}: {}", name, e);
+        }
     }
 }
```

Note: The above suggestion assumes the `kill` and `trigger` methods return a `Result`. If they don't, you'll need to adjust the error handling accordingly.
56-65
: Formatting inconsistency with get_base_node_client
The formatting style in `get_p2pool_grpc_client` is different from `get_base_node_client`. For consistency, consider standardizing the formatting approach.

```diff
 pub async fn get_p2pool_grpc_client<S: AsRef<str> + std::fmt::Display>(
     &self,
     name: &S,
 ) -> anyhow::Result<ShaP2PoolClient<TonicChannel>> {
-    self.get_p2pool_node(name).inspect_err(|e| {
-        error!(target: LOG_TARGET, "p2pool node '{}' not found: {}", name, e.to_string())
-    })?.get_grpc_client().await.inspect_err(|e| {
-        error!(target: LOG_TARGET, "Could not connect p2pool node '{}' grpc client: {}", name, e.to_string())
-    })
+    self.get_p2pool_node(name)
+        .inspect_err(|e| error!(target: LOG_TARGET, "p2pool node '{}' not found: {}", name, e.to_string()))?
+        .get_grpc_client()
+        .await
+        .inspect_err(
+            |e| error!(target: LOG_TARGET, "Could not connect p2pool node '{}' grpc client: {}", name, e.to_string()),
+        )
 }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (9)
- `.github/workflows/ci.yml` (2 hunks)
- `integration_tests/.husky/_/husky.sh` (1 hunks)
- `integration_tests/Cargo.toml` (1 hunks)
- `integration_tests/src/lib.rs` (1 hunks)
- `integration_tests/src/miner.rs` (1 hunks)
- `integration_tests/src/p2pool_process.rs` (1 hunks)
- `integration_tests/src/world.rs` (1 hunks)
- `integration_tests/tests/steps/base_node_steps.rs` (1 hunks)
- `integration_tests/tests/steps/p2pool_steps.rs` (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
- .github/workflows/ci.yml
- integration_tests/Cargo.toml
- integration_tests/src/lib.rs
🧰 Additional context used
🧠 Learnings (1)
integration_tests/src/p2pool_process.rs (1)
Learnt from: hansieodendaal
PR: tari-project/sha-p2pool#254
File: integration_tests/tests/cucumber.rs:25-33
Timestamp: 2025-02-26T07:13:31.464Z
Learning: Using `unwrap()` in test code is acceptable in this project, as test code doesn't need the same level of error handling as production code.
🪛 Shellcheck (0.10.0)
integration_tests/.husky/_/husky.sh
[warning] 18-18: ShellCheck can't follow non-constant source. Use a directive to specify location.
(SC1090)
⏰ Context from checks skipped due to timeout of 90000ms (3)
- GitHub Check: test (esmeralda)
- GitHub Check: machete
- GitHub Check: cargo check with stable
🔇 Additional comments (12)
integration_tests/.husky/_/husky.sh (1)
1-32
: Well-structured Husky hook implementation.
The script correctly implements all necessary components for a Husky hook system: initialization checking, debug logging, configuration sourcing, and error handling with proper exit codes.
🧰 Tools
🪛 Shellcheck (0.10.0)
[warning] 18-18: ShellCheck can't follow non-constant source. Use a directive to specify location.
(SC1090)
integration_tests/tests/steps/base_node_steps.rs (1)
1-46
: Well-structured Cucumber test steps for base node operations.
The implementation provides clear, consistent steps for different node initialization scenarios with appropriate error handling and logging for test diagnostics.
integration_tests/src/p2pool_process.rs (4)
33-50
: Well-structured P2PoolProcess with proper resource cleanup.
The P2PoolProcess struct with its Drop implementation ensures proper cleanup of resources, preventing orphaned processes.
52-68
: Clean high-level API for P2Pool node spawning.
The function provides a simple interface while delegating the complex configuration details to the implementation function.
257-294
: Robust process termination with port release verification.
The kill method properly terminates the process and includes a check to ensure ports are actually released, with a reasonable sleep interval to prevent tight spinning.
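The kill-and-wait pattern described here can be sketched with a bind-probe loop (function name, attempt count, and interval are illustrative, not the crate's exact values):

```rust
use std::net::TcpListener;
use std::time::Duration;

// Poll until a port can be bound again, sleeping between attempts so the
// loop does not spin. Returns true once the port is free, false if the
// attempt budget is exhausted.
fn wait_for_port_released(port: u16, max_attempts: u32) -> bool {
    for _ in 0..max_attempts {
        // A successful bind means nothing else is listening on the port
        if TcpListener::bind(("127.0.0.1", port)).is_ok() {
            return true;
        }
        std::thread::sleep(Duration::from_millis(250));
    }
    false
}

fn main() {
    // Bind an ephemeral port, drop the listener, then confirm it is reported free
    let port = {
        let listener = TcpListener::bind(("127.0.0.1", 0)).unwrap();
        listener.local_addr().unwrap().port()
    }; // listener dropped here, releasing the port
    println!("released: {}", wait_for_port_released(port, 40));
}
```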
436-501
: Thorough peer connection verification with proper error handling.
The function provides detailed error messages and uses a configurable timeout to allow sufficient time for peers to connect, which is essential for reliable test execution.
integration_tests/src/miner.rs (3)
28-83
: Well-implemented block mining with proper error handling and timeouts.
The function thoroughly handles errors, includes timeouts to prevent test hangs, and correctly reuses the wallet payment address between operations.
97-117
: Improved safety with limited mining attempts and proper error handling.
The function now returns a Result instead of panicking and includes a reasonable limit on mining attempts to prevent excessive resource usage.
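The bounded-search shape praised here can be illustrated with a toy version: a loop that returns `Err` instead of panicking when no qualifying value is found within the budget. The predicate below is a stand-in for the real difficulty check, and the function name is illustrative.

```rust
// Search for a nonce that satisfies a predicate, giving up with an error
// after `max_tries` attempts rather than looping forever or panicking.
fn find_nonce<F: Fn(u64) -> bool>(meets_difficulty: F, max_tries: u64) -> Result<u64, String> {
    for nonce in 0..max_tries {
        if meets_difficulty(nonce) {
            return Ok(nonce);
        }
    }
    Err(format!("no valid nonce found within {} tries", max_tries))
}

fn main() {
    // Toy predicate standing in for "achieved difficulty >= target"
    match find_nonce(|n| n > 0 && n % 97 == 0, 1_000) {
        Ok(nonce) => println!("found nonce {nonce}"),
        Err(e) => eprintln!("{e}"),
    }
}
```

Returning `Result` lets the calling test step decide whether a miss is fatal or simply means the target difficulty was set too high for the budget.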
119-185
: Robust block height verification with detailed error reporting.
The function properly checks block height from the HTTP stats API, includes timeouts, and provides detailed error messages when verification fails.
integration_tests/tests/steps/p2pool_steps.rs (1)
4-12
: Great job with the imports and organization!
The imports are well-organized and correctly reference the necessary modules from the integration test framework. Grouping the cucumber, logging, test utilities, and tokio imports clearly shows the test framework's dependencies.
integration_tests/src/world.rs (2)
26-32
: Well-designed error enum with good error messages
The `TariWorldError` enum is well-designed with clear, descriptive error messages that will help with debugging.
34-44
: Well-structured TariWorld struct
The `TariWorld` struct has a good organization with appropriate fields for tracking test state. The use of `IndexMap` provides predictable iteration order which is helpful for deterministic tests.
`dd178c3` to `4041cd1` Compare
Actionable comments posted: 2
🧹 Nitpick comments (21)
integration_tests/README.md (1)
30-30
: Consider adding a comma after "files".
For improved readability, add a comma between phrases:

```diff
-To run a specific file or files add `-- --input glob` to the command line
+To run a specific file or files, add `-- --input glob` to the command line
```

🧰 Tools
🪛 LanguageTool
[style] ~30-~30: Three successive sentences begin with the same word. Consider rewording the sentence or use a thesaurus to find a synonym.
Context: ...est cucumber -- -t "@critical" ``` - To run a specific file or files add `-- --...(ENGLISH_WORD_REPEAT_BEGINNING_RULE)
[uncategorized] ~30-~30: Possible missing comma found.
Context: ...cal" ``` - To run a specific file or files add-- --input glob
to the command li...(AI_HYDRA_LEO_MISSING_COMMA)
integration_tests/src/base_node_process.rs (1)
63-67
: Consider potential concurrency side effects when dropping the process.
Implementing `Drop` to invoke `kill` is convenient but might lead to concurrency issues if `Drop` is called unexpectedly (e.g., during a panic). Consider adding safeguards or checks to ensure that resources aren't inadvertently left in a limbo state when the destructor runs.
.github/workflows/integration_tests.yml (2)
30-30
: Clearer job name consideration.
Renaming the job to “sha_p2pool” while also labeling it “Base Layer” may cause confusion. Consider aligning the job name and its descriptive text to one, consistent naming convention.
80-82
: Ensure coverage for compile-only test step.
`cargo test --no-run` is effective for compilation checks, but consider adding steps or logs to confirm that all test binaries compile successfully.
p2pool/src/cli/args.rs (1)
228-243
: Reuse error handling or logging across commands.
`run_with_cli` explicitly calls each command's handler but repeats the error-handling pattern. Consider a shared error/logging utility or macro to streamline and maintain consistency across commands.
integration_tests/log4rs/cucumber.yml (2)
69-74
: Document the commented-out root logger configuration
Consider adding a brief comment explaining why this alternative root logger configuration exists and when it might be preferred over the active one.

```diff
 # We don't want prints during cucumber test, everything useful will in logs.
+# This commented configuration can be used when console output is needed during debugging.
 # root:
 #   level: warn
 #   appenders:
 #     - stdout
```
5-67
: Consider parameterizing log rotation settings
The file size limits (100MB for most logs, 10MB for "other") and rotation counts are hardcoded. For greater flexibility, consider extracting these values to variables at the top of the file.
integration_tests/src/lib.rs (2)
51-70
: Consider making wait_for_service timeout configurable
The `wait_for_service` function uses a hardcoded timeout (4 * 60 attempts with 250ms sleep). Consider making this configurable for services that might need more or less time to start.

```diff
-pub async fn wait_for_service(port: u16) {
+pub async fn wait_for_service(port: u16, max_seconds: Option<u64>) {
     use tokio::net::TcpStream;
-    let max_tries = 4 * 60;
+    let max_tries = max_seconds.unwrap_or(60) * 4; // 4 attempts per second with 250ms sleep
     let mut attempts = 0;
```
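A synchronous sketch of the suggested signature (the real helper is async and uses `tokio::net::TcpStream`; the default of 60 seconds and the 250 ms interval mirror the review's numbers):

```rust
use std::net::TcpStream;
use std::time::Duration;

// Wait for something to start listening on a local port, attempting a
// connection roughly four times per second. `max_seconds` overrides the
// default 60-second budget.
fn wait_for_service(port: u16, max_seconds: Option<u64>) -> bool {
    let max_tries = max_seconds.unwrap_or(60) * 4; // 4 attempts per second
    for _ in 0..max_tries {
        if TcpStream::connect(("127.0.0.1", port)).is_ok() {
            return true;
        }
        std::thread::sleep(Duration::from_millis(250));
    }
    false
}

fn main() {
    // With nothing listening, a 1-second budget fails quickly
    println!("service up: {}", wait_for_service(1, Some(1)));
}
```

Making the budget an `Option` keeps existing call sites short (`wait_for_service(port, None)`) while letting slow-starting services ask for more time.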
22-24
: Consider adding documentation for the constants
The constants `THIRTY_SECONDS_WITH_100_MS_SLEEP` and `HUNDRED_MS` could benefit from more descriptive comments explaining their purpose and usage in tests.

```diff
-pub const THIRTY_SECONDS_WITH_100_MS_SLEEP: u64 = 30 * 10;
-pub const HUNDRED_MS: u64 = 100;
+/// Number of iterations to wait for 30 seconds when using 100ms sleep intervals
+pub const THIRTY_SECONDS_WITH_100_MS_SLEEP: u64 = 30 * 10;
+/// Sleep duration in milliseconds for polling operations
+pub const HUNDRED_MS: u64 = 100;
```
71-255
: Consider breaking down the large function into smaller components
The `spawn_p2pool_node_with_config` function is quite long (marked with `#[allow(clippy::too_many_lines)]`). Consider extracting logical sections like configuration setup, directory creation, and process spawning into separate helper functions for better maintainability.
96-107
: Consider consistent port acquisition error handling
The error messages for port acquisition issues are simple strings. Consider using a more structured approach for consistency with the rest of the error handling.

```diff
-node_config.p2p_port = get_port(18000..18499, Duration::from_secs(20)).ok_or("p2p_port no free port")?;
-node_config.grpc_port = get_port(18500..18999, Duration::from_secs(20)).ok_or("grpc_port no free port")?;
+node_config.p2p_port = get_port(18000..18499, Duration::from_secs(20))
+    .ok_or_else(|| format!("Failed to find available p2p port in range 18000-18499"))?;
+node_config.grpc_port = get_port(18500..18999, Duration::from_secs(20))
+    .ok_or_else(|| format!("Failed to find available grpc port in range 18500-18999"))?;
```
107-114
: Consider increasing the maximum tries or making it configurable
The nonce finding function limits to 1 million attempts, which might be insufficient for some difficulty targets. Consider making this limit configurable based on the test requirements.

```diff
-if num_tries > 1_000_000 {
+// Default to 1 million tries, but allow for higher limits based on difficulty
+let max_tries = if achieved_difficulty > Difficulty::from_u64(1_000) { 10_000_000 } else { 1_000_000 };
+if num_tries > max_tries {
```

integration_tests/tests/steps/p2pool_steps.rs (5)
15-24
: Consider making the 1-second sleep configurable or conditional.
Each function imposes a 1-second delay after initializing the node. These sleeps can accumulate in larger test suites. Consider using a mechanism that checks node readiness or a configurable shorter timeout for faster feedback loops.
26-35
: Align error messages with function name.
Here, the error message references "`start_p2pool_node`:" but you might wish to confirm it's consistent with the actual function name to maintain clarity in logs.
37-46
: Validate block generation success.
While you log and panic on error, consider adding an explicit assertion or logging the number of blocks mined, ensuring there's clear traceability for the test steps.
48-57
: Ensure consistent step naming.
Using the phrase "p2pool node {} stats is at height {int}" is clear, but ensure it remains consistent with any references in your feature files or step definitions in another file (e.g., "p2pool node {} has a block height of {int}"). Consistency aids maintainability.
59-68
: Check peer disconnection scenario.
This step validates a successful connection to a peer. Consider adding a companion step to verify disconnection or failed connections, ensuring you cover negative test cases.
integration_tests/src/world.rs (4)
34-44
: Improve clarity around test environment stability.
The `TariWorld` struct maintains scenario data, node references, blocks, errors, etc. Consider documenting how these fields interrelate or if certain combinations (like `seed_nodes` vs. `base_nodes`) require special handling in tests. This can reduce confusion for new contributors.
58-68
: Evaluate the necessity of logging an additional error message.
Each `inspect_err` logs an error and returns an `Err(...)`. In busy logs, these can accumulate into near-duplicate errors. Optionally consider a single consistent error log or a more streamlined approach.
101-107
: Consider cleaning temp directories on scenario setup.
You join feature and scenario names into `current_base_dir` but do not remove any potential pre-existing directories. For fully isolated tests, you might remove them prior to test runs, or keep backups if necessary.
109-119
: Enhance shutdown hooks with structured logs.
Currently, the code prints to stdout when shutting down. For more robust debugging, use consistent logging at startup and shutdown of nodes. This helps correlate node start and stop events across logs.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (38)
- `.github/workflows/audit.yml` (1 hunks)
- `.github/workflows/ci.yml` (1 hunks)
- `.github/workflows/integration_tests.yml` (3 hunks)
- `Cargo.toml` (1 hunks)
- `integration_tests/.husky/_/husky.sh` (1 hunks)
- `integration_tests/Cargo.toml` (1 hunks)
- `integration_tests/README.md` (1 hunks)
- `integration_tests/log4rs/cucumber.yml` (1 hunks)
- `integration_tests/src/base_node_process.rs` (1 hunks)
- `integration_tests/src/lib.rs` (1 hunks)
- `integration_tests/src/miner.rs` (1 hunks)
- `integration_tests/src/p2pool_process.rs` (1 hunks)
- `integration_tests/src/world.rs` (1 hunks)
- `integration_tests/tests/cucumber.rs` (1 hunks)
- `integration_tests/tests/features/Sync.feature` (1 hunks)
- `integration_tests/tests/steps/base_node_steps.rs` (1 hunks)
- `integration_tests/tests/steps/mod.rs` (1 hunks)
- `integration_tests/tests/steps/p2pool_steps.rs` (1 hunks)
- `log4rs_detailed.yml` (1 hunks)
- `null` (1 hunks)
- `p2pool/Cargo.toml` (1 hunks)
- `p2pool/src/cli/args.rs` (4 hunks)
- `p2pool/src/cli/commands/mod.rs` (1 hunks)
- `p2pool/src/cli/commands/util.rs` (6 hunks)
- `p2pool/src/cli/mod.rs` (1 hunks)
- `p2pool/src/lib.rs` (1 hunks)
- `p2pool/src/main.rs` (1 hunks)
- `p2pool/src/server/config.rs` (4 hunks)
- `p2pool/src/server/grpc/p2pool.rs` (7 hunks)
- `p2pool/src/server/http/stats_collector.rs` (1 hunks)
- `p2pool/src/server/p2p/network.rs` (31 hunks)
- `p2pool/src/server/p2p/peer_store.rs` (3 hunks)
- `p2pool/src/server/server.rs` (1 hunks)
- `p2pool/src/sharechain/in_memory.rs` (4 hunks)
- `p2pool/src/sharechain/lmdb_block_storage.rs` (1 hunks)
- `p2pool/src/sharechain/p2block.rs` (1 hunks)
- `p2pool/src/sharechain/p2chain_level.rs` (1 hunks)
- `src/cli/mod.rs` (0 hunks)
💤 Files with no reviewable changes (1)
- src/cli/mod.rs
🚧 Files skipped from review as they are similar to previous changes (24)
- .github/workflows/ci.yml
- .github/workflows/audit.yml
- p2pool/src/server/p2p/peer_store.rs
- null
- p2pool/src/sharechain/lmdb_block_storage.rs
- p2pool/src/server/grpc/p2pool.rs
- p2pool/src/cli/mod.rs
- p2pool/src/server/http/stats_collector.rs
- log4rs_detailed.yml
- p2pool/src/cli/commands/util.rs
- p2pool/src/sharechain/p2chain_level.rs
- p2pool/src/cli/commands/mod.rs
- integration_tests/tests/cucumber.rs
- integration_tests/tests/steps/base_node_steps.rs
- p2pool/src/sharechain/p2block.rs
- integration_tests/tests/steps/mod.rs
- p2pool/src/server/p2p/network.rs
- integration_tests/tests/features/Sync.feature
- p2pool/src/main.rs
- p2pool/src/server/server.rs
- integration_tests/Cargo.toml
- p2pool/src/sharechain/in_memory.rs
- p2pool/src/server/config.rs
- p2pool/Cargo.toml
🧰 Additional context used
🧠 Learnings (1)
integration_tests/src/p2pool_process.rs (1)
Learnt from: hansieodendaal
PR: tari-project/sha-p2pool#254
File: integration_tests/tests/cucumber.rs:25-33
Timestamp: 2025-02-26T07:13:31.464Z
Learning: Using `unwrap()` in test code is acceptable in this project, as test code doesn't need the same level of error handling as production code.
🪛 Shellcheck (0.10.0)
integration_tests/.husky/_/husky.sh
[warning] 18-18: ShellCheck can't follow non-constant source. Use a directive to specify location.
(SC1090)
🪛 LanguageTool
integration_tests/README.md
[style] ~30-~30: Three successive sentences begin with the same word. Consider rewording the sentence or use a thesaurus to find a synonym.
Context: ...est cucumber -- -t "@critical" ``` - To run a specific file or files add `-- --...
(ENGLISH_WORD_REPEAT_BEGINNING_RULE)
[uncategorized] ~30-~30: Possible missing comma found.
Context: ...cal" ``` - To run a specific file or files add -- --input glob
to the command li...
(AI_HYDRA_LEO_MISSING_COMMA)
⏰ Context from checks skipped due to timeout of 90000ms (5)
- GitHub Check: test (esmeralda)
- GitHub Check: machete
- GitHub Check: clippy
- GitHub Check: cargo check with stable
- GitHub Check: Cucumber tests / Base Layer
🔇 Additional comments (19)
integration_tests/.husky/_/husky.sh (1)
1-32
: Well-structured Husky hook script.
The script properly implements a Git hook handler with good error handling and debugging support. It correctly separates the `readonly` and `export` operations as suggested in previous reviews.
🧰 Tools
🪛 Shellcheck (0.10.0)
[warning] 18-18: ShellCheck can't follow non-constant source. Use a directive to specify location.
(SC1090)
Cargo.toml (1)
1-14
: Good workspace configuration for integration tests.
The restructuring from a package to a workspace configuration properly separates the integration tests from the main package, improving project organization. The addition of overflow checks in release mode is also a good security practice.
p2pool/src/lib.rs (1)
1-14
: Clean module organization with clear public API.
The module structure and public exports provide a well-organized API that will make it easier to use this library in the integration tests. The code follows Rust's module organization best practices.
integration_tests/src/base_node_process.rs (1)
83-90
: Add a timeout or error handling to `spawn_base_node`.
In case the function never finds a free port or the spawned task fails, consider returning early with a user-friendly error rather than silently relying on an infinite search or indefinite block.
.github/workflows/integration_tests.yml (3)
23-23
: Verify Rust nightly stability.
You updated the Rust toolchain to `nightly-2024-09-05`. Verify that this specific nightly is stable enough for your use cases and that any features you rely on remain supported.
83-95
: Check environment variable usage in integration tests.
The condition `if: ${{ env.CI_BINS == 'true' }}` is helpful for toggling test runs. However, confirm that both the environment setup step and the test step are in sync, ensuring the correct variables propagate.
101-101
: Artifact naming is consistent.
Renaming the artifact to `junit-cucumber` clarifies the scope. Confirm any external references (scripts, dashboards) are updated to match the new name.
p2pool/src/cli/args.rs (3)
141-155
: Validate new fields for default values and constraints.
The newly introduced fields (`export_libp2p_info`, `network_silence_delay`, `minimum_sha3_target_difficulty`, `minimum_randomx_target_difficulty`) lack explicit validation or default behaviors. Consider establishing safe defaults or constraints (e.g., non-negative ranges) to prevent unintended configurations.
189-190
: Adding `Debug` for CLI is a good step.
Deriving `Debug` on CLI structs can simplify logging and debugging.
222-223
: Method extraction for clarity.
Extracting the command handling into `run_with_cli` is a clean approach that increases maintainability and testability.
1-119
: Comprehensive logging configuration with well-structured appenders and loggers
The logging configuration is well-organized with appropriate appenders and log levels for different components. The rolling file policy with size limits is a good practice for managing log files in test environments.
integration_tests/src/lib.rs (2)
27-42
: Well-implemented port allocation with timeout
The `get_port` function now includes proper timeout handling, addressing a potential issue where port selection could hang indefinitely. Good improvement!
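The deadline-bounded port scan praised here can be sketched synchronously (the signature follows the `get_port(18000..18499, Duration::from_secs(20))` call sites quoted in this review, but this body is an illustrative reconstruction, not the crate's implementation):

```rust
use std::net::TcpListener;
use std::ops::Range;
use std::time::{Duration, Instant};

// Scan a port range for a bindable port, giving up after `timeout` instead
// of hanging when every port stays busy.
fn get_port(range: Range<u16>, timeout: Duration) -> Option<u16> {
    let deadline = Instant::now() + timeout;
    while Instant::now() < deadline {
        for port in range.clone() {
            // A successful bind proves the port is currently free; the
            // listener is dropped immediately so the caller can reuse it
            if TcpListener::bind(("127.0.0.1", port)).is_ok() {
                return Some(port);
            }
        }
    }
    None
}

fn main() {
    match get_port(18000..18499, Duration::from_secs(5)) {
        Some(port) => println!("free port: {port}"),
        None => eprintln!("no free port in range before timeout"),
    }
}
```

Note the probe-then-drop approach is inherently racy: another process can grab the port between the check and the spawned node binding it, which is why the tests also verify the node actually started.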
72-94
: Good error handling in peer address retrieval
The peer address retrieval functions now use `filter_map` instead of `unwrap`, making them more robust by gracefully handling missing peers. This properly addresses previous review comments.
264-293
: Good improvement in port release checking
The `kill` method now includes proper sleep between port checking attempts, preventing CPU-intensive spin loops. This addresses a previous review comment effectively.
436-501
: Good error handling in verify_peer_connected
The `verify_peer_connected` function now properly returns errors instead of panicking, with clear error messages for each failure case. This makes the tests more robust and easier to debug.
28-83
: Excellent error handling and timeout management
The `mine_and_submit_tari_blocks` function now has comprehensive error handling with detailed error messages and proper timeouts on network operations. This addresses previous review comments and significantly improves robustness.
97-117
: Good improvement in nonce finding error handling
The `find_sha3x_header_with_achieved_difficulty` function now returns errors instead of panicking when it can't find a valid nonce, making it more robust for testing.
119-185
: Well-structured block height verification with proper error handling
The `verify_block_height` function now includes proper error handling for all cases, including checking if the HTTP server is enabled, handling network errors, and detecting when the node exceeds the expected height.
26-32
: Confirm error messages align with actual failing context.
You have distinct errors for missing P2Pool processes and missing base node processes. Ensure the coverage is complete so that other potential mismatch conditions won't result in ambiguous error reporting.
Actionable comments posted: 0
🧹 Nitpick comments (7)
integration_tests/src/p2pool_process.rs (6)
114-143
: Consider simplifying the file existence check.
The current pattern uses a match statement on `fs::exists` with three branches, which is a bit verbose. The third branch (`_ => {}`) is redundant since `fs::exists` only returns `Ok(true)`, `Ok(false)`, or `Err`.

```diff
-match fs::exists(temp_dir_path.clone()) {
-    Ok(false) => {
-        if let Err(err) = fs::create_dir_all(&temp_dir_path) {
-            return Err(format!(
-                "Failed to create temp_dir_path at: '{}', error: {}",
-                temp_dir_path.display(),
-                err
-            )
-            .into());
-        }
-        if let Err(err) = fs::create_dir_all(temp_dir_path.join("configs")) {
-            return Err(format!(
-                "Failed to create configs dir at: '{}', error: {}",
-                temp_dir_path.join("configs").display(),
-                err
-            )
-            .into());
-        }
-    },
-    Err(err) => {
-        return Err(format!(
-            "Failed to check if temp_dir_path exists at: '{}', error: {}",
-            temp_dir_path.display(),
-            err
-        )
-        .into());
-    },
-    _ => {},
-}
+if let Err(err) = fs::exists(temp_dir_path.clone()) {
+    return Err(format!(
+        "Failed to check if temp_dir_path exists at: '{}', error: {}",
+        temp_dir_path.display(),
+        err
+    )
+    .into());
+} else if !fs::exists(temp_dir_path.clone()).unwrap() {
+    if let Err(err) = fs::create_dir_all(&temp_dir_path) {
+        return Err(format!(
+            "Failed to create temp_dir_path at: '{}', error: {}",
+            temp_dir_path.display(),
+            err
+        )
+        .into());
+    }
+    if let Err(err) = fs::create_dir_all(temp_dir_path.join("configs")) {
+        return Err(format!(
+            "Failed to create configs dir at: '{}', error: {}",
+            temp_dir_path.join("configs").display(),
+            err
+        )
+        .into());
+    }
+}
```
159-163
: Remove commented out code.
The commented out code appears to be alternative formatting for the base node address. Since it's not being used, it should be removed to keep the codebase clean.
let base_node_address = if let Some(base_node_process) = world.base_nodes.get(&base_node_name) { format!("http://127.0.0.1:{}", base_node_process.grpc_port) - // format!("/ip4/127.0.0.1/tcp/{}", base_node_process.grpc_port) } else if let Some(base_node_process) = world.base_nodes.get_index(0) { format!("http://127.0.0.1:{}", base_node_process.1.grpc_port) - // format!("/ip4/127.0.0.1/tcp/{}", base_node_process.1.grpc_port) } else {
217-231
: Consider extracting command execution into a reusable function.
The command execution logic could be extracted into a separate function for better reusability, especially since similar patterns may be used elsewhere in the codebase.
This would make the code more maintainable and easier to test by:
- Separating command construction from execution
- Making error handling more consistent
- Creating a reusable utility for similar operations elsewhere
276-293
: Add timeout message and improve port checking loop.
The port checking loop should provide feedback if it reaches the maximum attempts, and the sleep duration should be properly documented.
// Wait till the ports are cleared let mut p2p_port_closed = false; let mut grpc_port_closed = false; let mut attempts = 0; + const MAX_ATTEMPTS: u32 = 40; + const PORT_CHECK_SLEEP_MS: u64 = 250; loop { if !p2p_port_closed && TcpListener::bind(("127.0.0.1", self.config.p2p_port)).is_ok() { p2p_port_closed = true; } if !grpc_port_closed && TcpListener::bind(("127.0.0.1", self.config.grpc_port)).is_ok() { grpc_port_closed = true; } attempts += 1; - std::thread::sleep(std::time::Duration::from_millis(250)); - if attempts >= 40 { + std::thread::sleep(std::time::Duration::from_millis(PORT_CHECK_SLEEP_MS)); + if attempts >= MAX_ATTEMPTS { + debug!(target: LOG_TARGET, "Reached maximum attempts ({}) waiting for ports to be freed", MAX_ATTEMPTS); break; } }
436-501
: Improve timeout handling in verify_peer_connected. The function makes multiple attempts to verify the peer connection, but the timeout mechanism is unclear. Consider making the timeout parameters more explicit and providing better logging of elapsed time.
pub async fn verify_peer_connected(world: &mut TariWorld, p2pool_name: String, peer_name: String) -> TestResult<()> { debug!(target: LOG_TARGET, "verify '{}' is connected to peer '{}'", p2pool_name, peer_name); let start = Instant::now(); + const MAX_ATTEMPTS: usize = THIRTY_SECONDS_WITH_100_MS_SLEEP * 10; + const SLEEP_DURATION: Duration = Duration::from_millis(HUNDRED_MS); + const TOTAL_TIMEOUT: Duration = Duration::from_millis(HUNDRED_MS * MAX_ATTEMPTS as u64); + debug!(target: LOG_TARGET, "Will attempt verification for up to {:.2?}", TOTAL_TIMEOUT); let p2pool_process = world.get_p2pool_node(&p2pool_name)?; if !p2pool_process.config.http_server.enabled { return Err(format!("p2pool node '{}' doesn't have the http server enabled", p2pool_name).into()); } // ...rest of the function... - for i in 0..(THIRTY_SECONDS_WITH_100_MS_SLEEP * 10) { + for i in 0..MAX_ATTEMPTS { // ...response handling... if i % 10 == 0 { - debug!(target: LOG_TARGET, "{}: waiting for '{}' to show peer connected", i, connections_url); + debug!( + target: LOG_TARGET, + "{}/{}: waiting for '{}' to show peer connected (elapsed: {:.2?})", + i, MAX_ATTEMPTS, connections_url, start.elapsed() + ); } - tokio::time::sleep(Duration::from_millis(HUNDRED_MS)).await; + tokio::time::sleep(SLEEP_DURATION).await; } let msg = format!("Peer '{}' is NOT connected to '{}' after {:.2?}", peer_name, p2pool_name, start.elapsed()); error!(target: LOG_TARGET, "{}", msg); Err(msg.to_string().into()) }
70-78
: Function has `#[allow(clippy::too_many_lines)]` but could be refactored. The `spawn_p2pool_node_with_config` function is marked with `#[allow(clippy::too_many_lines)]`, indicating it's recognized as being too long. Consider breaking it into smaller, more focused functions, such as:
- A function to prepare the configuration
- A function to set up the directory structure
- A function to find and connect to base nodes
- A function to handle seed peers
- A function to start the actual process
This would improve maintainability and testability of the code.
.github/workflows/integration_tests.yml (1)
41-47
: Environment Setup and Typo Correction.
The "Envs setup" step appropriately configures several environment variables for the workflow. However, note the inline comment on line 43 contains a couple of typos ("wii" should be "will" and "cucunber.rs" should likely be "cucumber.rs"). Consider applying the following diff for clarity:- # This wii prohibit the sha_p2pool executable from being rebuilt (see 'cucunber.rs' for more info) + # This will prohibit the sha_p2pool executable from being rebuilt (see 'cucumber.rs' for more info)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
.github/workflows/integration_tests.yml
(3 hunks)integration_tests/src/base_node_process.rs
(1 hunks)integration_tests/src/p2pool_process.rs
(1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- integration_tests/src/base_node_process.rs
🧰 Additional context used
🧠 Learnings (1)
integration_tests/src/p2pool_process.rs (1)
Learnt from: hansieodendaal
PR: tari-project/sha-p2pool#254
File: integration_tests/tests/cucumber.rs:25-33
Timestamp: 2025-02-26T07:13:31.464Z
Learning: Using `unwrap()` in test code is acceptable in this project, as test code doesn't need the same level of error handling as production code.
⏰ Context from checks skipped due to timeout of 90000ms (4)
- GitHub Check: test (esmeralda)
- GitHub Check: cargo check with stable
- GitHub Check: Cucumber tests / Base Layer
- GitHub Check: clippy
🔇 Additional comments (10)
integration_tests/src/p2pool_process.rs (5)
458-469
: Good error handling in verify_peer_connected. The code correctly handles error cases and provides descriptive error messages. The early return with detailed error information is a good practice.
296-421
: Well-structured argument handling in to_args_command_line. The function systematically converts each field from the StartArgs struct into command-line arguments, handling both flags and values appropriately. This makes it easy to maintain and extend as new arguments are added.
33-44
: Well-designed P2PoolProcess struct. The struct contains all necessary information to manage a P2Pool node, including its configuration, identification, and process state. This comprehensive structure makes it easier to track and manage nodes.
452-496
:❓ Verification inconclusive
Add pagination support for connections endpoint.
The connections API might paginate results for nodes with many peers, but the current code only processes a single response. Consider checking for and handling pagination if the API supports it.
🏁 Script executed:
#!/bin/bash # Check if the connections endpoint supports or uses pagination # We'll examine the HTTP response and JSON structure # First, let's look for any API documentation find . -type f -name "*.md" -exec grep -l "connections" {} \; | xargs grep -l "API" || echo "No API documentation found mentioning connections endpoint" # Check for pagination parameters in source code rg -A 3 "connections(\"|')" --glob "*.rs" | grep -i "page\|limit\|offset" # Look for pagination-related fields in the JSON response rg -A 10 "json.*connections" --glob "*.rs" | grep -i "next\|prev\|total\|page\|limit"Length of output: 377
Review Update: Confirm Pagination Handling for Connections Endpoint
The current test only processes a single response from the connections API. Although our automated search didn't reveal any evidence (e.g., pagination parameters like `page`, `limit`, or `offset`) that the endpoint supports pagination, it's unclear whether the API might start paginating results for nodes with many peers. Please double-check the API documentation or perform manual verification to confirm whether pagination is required. If the API starts paginating, this test should be updated to iterate over all pages.
- File: `integration_tests/src/p2pool_process.rs` (Lines 452-496)
- Action: Verify whether the connections endpoint can return paginated results. If so, extend the logic to handle additional pages.
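If the endpoint does turn out to paginate, the page-draining loop could be factored into a small generic helper along these lines (the `collect_paginated` name and the page-index protocol are assumptions for illustration, not a description of the actual API):

```rust
/// Illustrative helper: drains a paginated source by calling `fetch_page`
/// with increasing page indices until it returns `None` or an empty page.
fn collect_paginated<T, F>(mut fetch_page: F) -> Vec<T>
where
    F: FnMut(usize) -> Option<Vec<T>>,
{
    let mut all = Vec::new();
    let mut page = 0;
    while let Some(items) = fetch_page(page) {
        if items.is_empty() {
            break; // treat an empty page as end-of-data
        }
        all.extend(items);
        page += 1;
    }
    all
}

fn main() {
    // Simulate three pages of peer IDs; the third page is empty.
    let pages = vec![vec!["peer_a", "peer_b"], vec!["peer_c"], vec![]];
    let peers = collect_paginated(|i| pages.get(i).cloned());
    assert_eq!(peers, vec!["peer_a", "peer_b", "peer_c"]);
}
```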
423-434
:❓ Verification inconclusive
The executable path retrieval could be more robust.
The current implementation assumes the executable is in `../target/release/` relative to the manifest directory, but this might not always be true depending on build configurations.
🏁 Script executed:
#!/bin/bash # Check if the sha_p2pool executable exists at the expected path if [[ -d "$(dirname "$(cargo locate-project --message-format=plain)")" ]]; then CRATE_ROOT="$(dirname "$(cargo locate-project --message-format=plain)")" EXPECTED_PATH_WIN="$CRATE_ROOT/target/release/sha_p2pool.exe" EXPECTED_PATH_UNIX="$CRATE_ROOT/target/release/sha_p2pool" if [[ -x "$EXPECTED_PATH_UNIX" || -f "$EXPECTED_PATH_WIN" ]]; then echo "✓ Executable exists at expected path" else echo "✗ Executable not found at expected path" echo "Checking for other possible locations..." # Check debug build DEBUG_PATH_WIN="$CRATE_ROOT/target/debug/sha_p2pool.exe" DEBUG_PATH_UNIX="$CRATE_ROOT/target/debug/sha_p2pool" if [[ -x "$DEBUG_PATH_UNIX" || -f "$DEBUG_PATH_WIN" ]]; then echo "Found at debug path instead" fi # Try to find it elsewhere find "$CRATE_ROOT" -name "sha_p2pool*" -type f -perm -u+x 2>/dev/null || echo "No executable found in project directory" fi else echo "Could not locate project root" fiLength of output: 923
Executable Path Robustness – Verify and Consider Fallbacks
The current implementation of `get_p2pool_exe_path()` relies on a hard-coded relative path (`../target/release/...`) computed from `env!("CARGO_MANIFEST_DIR")`. Our verification script indicates that when the environment isn't fully set up (e.g., the `cargo` command is missing), the executable isn't found, even though it might be present in a normal development/build setup. This confirms the concern: the approach may fail under nonstandard build configurations, or if a debug build or custom target directory is used.
Points to consider:
- The function assumes the executable is always under `target/release/`, which doesn't account for cases where a debug build or an alternative target directory is used.
- Consider checking for an environment variable (such as `CARGO_TARGET_DIR`) or implementing fallback logic (e.g., verifying the existence of the executable in both `target/release/` and `target/debug/`).
- A more dynamic resolution would help ensure the correct executable is located regardless of varying build configurations.
Suggested code snippet update (illustrative):
pub fn get_p2pool_exe_path() -> PathBuf { let crate_root = PathBuf::from(env!("CARGO_MANIFEST_DIR")); #[cfg(windows)] { let default_path = crate_root.join("../target/release/sha_p2pool.exe"); // Optionally, check for alternative paths (e.g., debug path) or use an env var override. default_path } #[cfg(not(windows))] { let default_path = crate_root.join("../target/release/sha_p2pool"); // Optionally, check for alternative paths (e.g., debug path) or use an env var override. default_path } }Note: You might enhance this function by adding runtime checks to see if the binary exists at the default path and then looking in alternative locations if it does not.
.github/workflows/integration_tests.yml (5)
22-24
: Update Rust Toolchain Version.
The environment variable `toolchain` has been updated to `nightly-2024-09-05`, which appears to align with the current requirements. Please double-check that this toolchain version is compatible with all dependencies and build environments.
29-32
: Job Renaming and Consistency.
The job has been renamed to `sha_p2pool`, reflecting a shift in focus. Ensure that any documentation or downstream configurations referencing the old job name are updated accordingly.
🧰 Tools
🪛 actionlint (1.7.4)
32-32: label "ubuntu-high-cpu" is unknown. available labels are "windows-latest", "windows-latest-8-cores", "windows-2022", "windows-2019", "ubuntu-latest", "ubuntu-latest-4-cores", "ubuntu-latest-8-cores", "ubuntu-latest-16-cores", "ubuntu-24.04", "ubuntu-22.04", "ubuntu-20.04", "macos-latest", "macos-latest-xl", "macos-latest-xlarge", "macos-latest-large", "macos-15-xlarge", "macos-15-large", "macos-15", "macos-14-xl", "macos-14-xlarge", "macos-14-large", "macos-14", "macos-13-xl", "macos-13-xlarge", "macos-13-large", "macos-13", "macos-12-xl", "macos-12-xlarge", "macos-12-large", "macos-12", "self-hosted", "x64", "arm", "arm64", "linux", "macos", "windows". if it is a custom label for self-hosted runner, set list of labels in actionlint.yaml config file
(runner-label)
80-85
: Cargo Test Compile Step Review.
The added step builds the `sha_p2pool` executable and compiles tests. One point to verify is the use of the environment variable `${{ env.TARGET_BINS }}`. Please ensure that `TARGET_BINS` is defined in the workflow or elsewhere in the repository; otherwise, the `cargo test` command might not behave as expected.
86-99
: Integration Tests Execution Step.
The integration tests command is well-structured, incorporating options such as verbosity, a retry mechanism (`--retry 2`), and concurrency (`-c 5`). These settings should help mitigate intermittent issues during the test run. For future flexibility, consider making some of these parameters configurable via workflow inputs if test flakiness is observed.
100-105
: Artifact Upload Naming Consistency.
The artifact name has been updated to `junit-cucumber`, which consolidates the output naming convention. Verify that any systems or scripts that depend on the previous artifact name (`junit-ffi-cucumber`) are updated accordingly to prevent downstream issues.
6854793
to
34dc06b
Compare
Actionable comments posted: 0
🧹 Nitpick comments (6)
integration_tests/src/p2pool_process.rs (5)
158-164
: Remove commented-out code. There are commented-out lines that should be removed since they're not being used.
let base_node_address = if let Some(base_node_process) = world.base_nodes.get(&base_node_name) { format!("http://127.0.0.1:{}", base_node_process.grpc_port) - // format!("/ip4/127.0.0.1/tcp/{}", base_node_process.grpc_port) } else if let Some(base_node_process) = world.base_nodes.get_index(0) { format!("http://127.0.0.1:{}", base_node_process.1.grpc_port) - // format!("/ip4/127.0.0.1/tcp/{}", base_node_process.1.grpc_port) } else {
114-142
: Simplify directory existence check using `fs::create_dir_all`. The current implementation has complex nested error handling for directory creation. `fs::create_dir_all`
already creates parent directories if they don't exist and is idempotent (safe to call on existing directories).- match fs::exists(temp_dir_path.clone()) { - Ok(false) => { - if let Err(err) = fs::create_dir_all(&temp_dir_path) { - return Err(format!( - "Failed to create temp_dir_path at: '{}', error: {}", - temp_dir_path.display(), - err - ) - .into()); - } - if let Err(err) = fs::create_dir_all(temp_dir_path.join("configs")) { - return Err(format!( - "Failed to create configs dir at: '{}', error: {}", - temp_dir_path.join("configs").display(), - err - ) - .into()); - } - }, - Err(err) => { - return Err(format!( - "Failed to check if temp_dir_path exists at: '{}', error: {}", - temp_dir_path.display(), - err - ) - .into()); - }, - _ => {}, - } + if let Err(err) = fs::create_dir_all(&temp_dir_path) { + return Err(format!( + "Failed to create temp_dir_path at: '{}', error: {}", + temp_dir_path.display(), + err + ) + .into()); + } + if let Err(err) = fs::create_dir_all(temp_dir_path.join("configs")) { + return Err(format!( + "Failed to create configs dir at: '{}', error: {}", + temp_dir_path.join("configs").display(), + err + ) + .into()); + }
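A tiny standalone check of that idempotency claim (the directory names used here are illustrative):

```rust
use std::fs;

fn main() {
    let base = std::env::temp_dir().join("p2pool_create_dir_all_demo");
    let configs = base.join("configs");
    // First call creates the full path, including any missing parents.
    fs::create_dir_all(&configs).expect("first create_dir_all failed");
    // Second call on an already-existing path is a no-op and still returns Ok.
    fs::create_dir_all(&configs).expect("second create_dir_all failed");
    assert!(configs.is_dir());
    // Clean up; errors are ignored since this is only a demo.
    let _ = fs::remove_dir_all(&base);
}
```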
268-272
: Add process details to error message. Consider including the executable path in the error message for easier debugging:
println!( - "Failed to kill p2pool node: '{}', process id: {}, '{}'", - self.name.clone(), - child.id(), - e + "Failed to kill p2pool node: '{}', process id: {}, path: '{}', error: '{}'", + self.name.clone(), + child.id(), + get_p2pool_exe_path().display(), + e );
452-496
: Consider making the timeout configurable. The function uses `THIRTY_SECONDS_WITH_100_MS_SLEEP * 10`,
which is a 5-minute timeout. For flexibility in testing, consider making this timeout configurable as a parameter with a default value.-pub async fn verify_peer_connected(world: &mut TariWorld, p2pool_name: String, peer_name: String) -> TestResult<()> { +pub async fn verify_peer_connected( + world: &mut TariWorld, + p2pool_name: String, + peer_name: String, + timeout_iterations: Option<usize> +) -> TestResult<()> { // ... - for i in 0..(THIRTY_SECONDS_WITH_100_MS_SLEEP * 10) { + let iterations = timeout_iterations.unwrap_or(THIRTY_SECONDS_WITH_100_MS_SLEEP * 10); + for i in 0..iterations { // ... } // ... }
96-104
: Consider using a more sophisticated port allocation strategy. Currently, port allocation happens in fixed ranges with a timeout. This works, but could be improved by:
- Caching/reusing successful port allocations between runs
- Using a more dynamic allocation strategy like OS-assigned ports (with 0)
- Implementing exponential backoff on port retry attempts
This is a good-to-have improvement for future iterations.
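The OS-assigned option from the list above can be sketched in a few lines; `get_os_assigned_port` is a hypothetical helper name, not part of the codebase:

```rust
use std::net::TcpListener;

/// Asks the OS for any free port by binding to port 0, then reads back the
/// port it actually assigned. Note the small race window: another process
/// could claim the port between dropping the listener and reusing the number.
fn get_os_assigned_port() -> std::io::Result<u16> {
    let listener = TcpListener::bind(("127.0.0.1", 0))?;
    Ok(listener.local_addr()?.port())
}

fn main() {
    let port = get_os_assigned_port().expect("binding to port 0 failed");
    assert_ne!(port, 0);
    println!("OS assigned port {}", port);
}
```

This avoids scanning fixed ranges entirely, at the cost of ports that are no longer predictable across runs.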
.github/workflows/integration_tests.yml (1)
41-47
: Environment Setup Enhancements & Typo Corrections
The environment setup step now sets several environment variables including `CI_PROFILE`, `DO_NOT_REBUILD_SHA_P2POOL`, and `CI_BINS`. The logic is sound, but note the following minor issues:
- There is a typo in the comment: "This wii prohibit..." should be "This will prohibit...".
- The reference `'cucunber.rs'` appears to be a typo; it likely should be `'cucumber.rs'` (unless this naming is intentional).
A suggested diff for clarity:
- # This wii prohibit the sha_p2pool executable from being rebuilt (see 'cucunber.rs' for more info) + # This will prohibit the sha_p2pool executable from being rebuilt (see 'cucumber.rs' for more info)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
.github/workflows/integration_tests.yml
(3 hunks)integration_tests/src/base_node_process.rs
(1 hunks)integration_tests/src/p2pool_process.rs
(1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- integration_tests/src/base_node_process.rs
🧰 Additional context used
🧠 Learnings (1)
integration_tests/src/p2pool_process.rs (1)
Learnt from: hansieodendaal
PR: tari-project/sha-p2pool#254
File: integration_tests/tests/cucumber.rs:25-33
Timestamp: 2025-02-26T07:13:31.464Z
Learning: Using `unwrap()` in test code is acceptable in this project, as test code doesn't need the same level of error handling as production code.
⏰ Context from checks skipped due to timeout of 90000ms (3)
- GitHub Check: test (esmeralda)
- GitHub Check: cargo check with stable
- GitHub Check: Cucumber tests / Base Layer
🔇 Additional comments (8)
integration_tests/src/p2pool_process.rs (3)
440-443
: LGTM! Good job implementing proper error handling with `?` instead of `unwrap` or `expect`. This is a significant improvement compared to earlier versions.
258-262
: LGTM! The gRPC client implementation is clean and efficient, with good error handling and debugging.
91-94
: LGTM! The interval settings (2 seconds) are appropriate for integration testing purposes and help ensure quick responses during tests.
.github/workflows/integration_tests.yml (5)
22-24
: Update Rust Toolchain Version
The environment variable `toolchain` has been updated to `nightly-2024-09-05`. Please verify that this nightly version is intended and stable for your integration tests.
29-32
: Job Renaming Consistency
The job previously known as `base_layer` is now renamed to `sha_p2pool`. Ensure that any documentation, references in scripts, or downstream workflows that refer to the old job name are updated accordingly for consistency.
🧰 Tools
🪛 actionlint (1.7.4)
32-32: label "ubuntu-high-cpu" is unknown. available labels are "windows-latest", "windows-latest-8-cores", "windows-2022", "windows-2019", "ubuntu-latest", "ubuntu-latest-4-cores", "ubuntu-latest-8-cores", "ubuntu-latest-16-cores", "ubuntu-24.04", "ubuntu-22.04", "ubuntu-20.04", "macos-latest", "macos-latest-xl", "macos-latest-xlarge", "macos-latest-large", "macos-15-xlarge", "macos-15-large", "macos-15", "macos-14-xl", "macos-14-xlarge", "macos-14-large", "macos-14", "macos-13-xl", "macos-13-xlarge", "macos-13-large", "macos-13", "macos-12-xl", "macos-12-xlarge", "macos-12-large", "macos-12", "self-hosted", "x64", "arm", "arm64", "linux", "macos", "windows". if it is a custom label for self-hosted runner, set list of labels in actionlint.yaml config file
(runner-label)
80-84
: Cargo Test Compile Step Verification
The added "cargo test compile" step compiles the `sha_p2pool` binary and pre-builds tests, which is a solid approach to ensure the binary is ready before running cucumber tests.
- Note: The command uses `${{ env.TARGET_BINS }}`. Please confirm that this environment variable is defined (or provide a default) to avoid potential build failures if it is missing.
86-98
: Integration Test Execution Review
The integration tests are executed conditionally based on the `CI_BINS` flag and include a robust set of options (e.g., test filtering with `-- -t "${{ env.CI_PROFILE }} and (not @broken)"`, concurrency settings, and retries).
- Ensure that the retry logic (`--retry 2`) is aligned with your testing strategy, as it might mask intermittent failures if not monitored.
- Consider adding a comment or documentation to clarify the meaning of the test filter in the future.
104-105
: Artifact Naming Update
The upload artifact step now references the artifact name `junit-cucumber` instead of the previous `junit-ffi-cucumber`. Confirm that downstream processes or integrations consuming this artifact are updated to reflect the new naming convention.
Actionable comments posted: 0
🧹 Nitpick comments (4)
.github/workflows/integration_tests.yml (2)
42-44
: Consider adding a comment explaining the CI_PROFILE value. The CI_PROFILE environment variable is hardcoded with a specific value. Consider adding a comment explaining why this specific profile is used and what the tags
@critical
andnot @long-running
mean for better maintainability.- echo "CI_PROFILE=@critical and (not @long-running)" >> $GITHUB_ENV + # Set CI profile to run only critical tests and exclude long-running tests + # This ensures CI runs are quick while covering essential functionality + echo "CI_PROFILE=@critical and (not @long-running)" >> $GITHUB_ENV
86-98
: Consider parameterizing test execution settings. The test execution command has hardcoded values for concurrency and retry attempts. Consider making these configurable via input parameters to provide more flexibility.
if: ${{ env.CI_BINS == 'true' }} timeout-minutes: 90 run: | cargo test \ --test cucumber \ -v \ --all-features \ --release \ --package tari_integration_tests \ -- -t "${{ env.CI_PROFILE }} and (not @broken)" \ - -c 5 \ - --retry 2 + -c ${{ inputs.concurrency || 5 }} \ + --retry ${{ inputs.retry_attempts || 2 }}integration_tests/src/p2pool_process.rs (2)
70-130
: Consider extracting configuration setup into a separate function. The configuration setup logic in `spawn_p2pool_node_with_config`
is quite complex. Consider extracting it into a separate function to improve readability and maintainability.async fn spawn_p2pool_node_with_config( world: &mut TariWorld, is_seed_node: bool, name: String, squad: String, mut node_config: ShaP2PoolConfig, base_node_name: String, ) -> TestResult<()> { std::env::set_var("TARI_NETWORK", "localnet"); set_network_if_choice_valid(Network::LocalNet)?; let temp_dir_path: PathBuf; + // Initialize the configuration and directory structure + (temp_dir_path, node_config) = setup_node_config(world, &name, &squad, is_seed_node, node_config)?; - if let Some(node_ps) = world.p2pool_nodes.get(&name) { - temp_dir_path = node_ps.temp_dir_path.clone(); - node_config = node_ps.config.clone(); - } else { - node_config.p2p_service.squad_prefix = squad.clone(); - node_config.p2p_service.squad_override = Some(squad.clone()); - node_config.p2p_service.is_seed_peer = is_seed_node; - node_config.p2p_service.peer_info_publish_interval = Duration::from_secs(2); - node_config.p2p_service.peer_exchange_interval = Duration::from_secs(2); - node_config.p2p_service.meta_data_exchange_interval = Duration::from_secs(2); - node_config.network_silence_delay = 0; - // Each spawned p2pool node will use different ports - node_config.p2p_port = get_port(18000..18499, Duration::from_secs(20)).ok_or("p2p_port no free port")?; - node_config.grpc_port = get_port(18500..18999, Duration::from_secs(20)).ok_or("grpc_port no free port")?; - if is_seed_node { - node_config.http_server.enabled = false; - } else { - node_config.http_server.port = - get_port(19000..19499, Duration::from_secs(20)).ok_or("http_server_port no free port")?; - node_config.http_server.enabled = true; - } - // "/ip4/127.0.0.1/tcp/{}" or "/ip4/127.0.0.1/udp/{}/quic-v1" - node_config.p2p_service.external_addr = Some(format!("/ip4/127.0.0.1/udp/{}/quic-v1", node_config.p2p_port)); - // Create a new temporary directory - temp_dir_path = world - .current_base_dir - .as_ref() - .expect("p2pool dir on 
world") - .join("p2pool_nodes") - .join(format!("{}_grpc_port_{}", name.clone(), node_config.grpc_port)); - if let Err(err) = fs::create_dir_all(&temp_dir_path) { - return Err(format!( - "Failed to create temp_dir_path at: '{}', error: {}", - temp_dir_path.display(), - err - ) - .into()); - } - if let Err(err) = fs::create_dir_all(temp_dir_path.join("configs")) { - return Err(format!( - "Failed to create configs dir at: '{}', error: {}", - temp_dir_path.join("configs").display(), - err - ) - .into()); - } - };Then implement the extracted function:
/// Sets up the node configuration and directory structure fn setup_node_config( world: &mut TariWorld, name: &str, squad: &str, is_seed_node: bool, mut node_config: ShaP2PoolConfig ) -> TestResult<(PathBuf, ShaP2PoolConfig)> { let temp_dir_path: PathBuf; if let Some(node_ps) = world.p2pool_nodes.get(name) { temp_dir_path = node_ps.temp_dir_path.clone(); node_config = node_ps.config.clone(); } else { // Configure the node node_config.p2p_service.squad_prefix = squad.clone(); node_config.p2p_service.squad_override = Some(squad.clone()); node_config.p2p_service.is_seed_peer = is_seed_node; node_config.p2p_service.peer_info_publish_interval = Duration::from_secs(2); node_config.p2p_service.peer_exchange_interval = Duration::from_secs(2); node_config.p2p_service.meta_data_exchange_interval = Duration::from_secs(2); node_config.network_silence_delay = 0; // Allocate ports node_config.p2p_port = get_port(18000..18499, Duration::from_secs(20)) .ok_or("p2p_port no free port")?; node_config.grpc_port = get_port(18500..18999, Duration::from_secs(20)) .ok_or("grpc_port no free port")?; // Configure HTTP server based on node type if is_seed_node { node_config.http_server.enabled = false; } else { node_config.http_server.port = get_port(19000..19499, Duration::from_secs(20)) .ok_or("http_server_port no free port")?; node_config.http_server.enabled = true; } // Set external address node_config.p2p_service.external_addr = Some( format!("/ip4/127.0.0.1/udp/{}/quic-v1", node_config.p2p_port) ); // Create directory structure temp_dir_path = world .current_base_dir .as_ref() .expect("p2pool dir on world") .join("p2pool_nodes") .join(format!("{}_grpc_port_{}", name, node_config.grpc_port)); if let Err(err) = fs::create_dir_all(&temp_dir_path) { return Err(format!( "Failed to create temp_dir_path at: '{}', error: {}", temp_dir_path.display(), err ).into()); } if let Err(err) = fs::create_dir_all(temp_dir_path.join("configs")) { return Err(format!( "Failed to create configs dir at: 
'{}', error: {}", temp_dir_path.join("configs").display(), err ).into()); } } Ok((temp_dir_path, node_config)) }
409-420
: Consider using Cargo environment variables for more flexibility. The `get_p2pool_exe_path`
function uses a hardcoded relative path. Consider using more Cargo environment variables to make the path resolution more flexible and reliable in different build environments.pub fn get_p2pool_exe_path() -> PathBuf { #[cfg(windows)] { - let crate_root = PathBuf::from(env!("CARGO_MANIFEST_DIR")); - crate_root.join("../target/release/sha_p2pool.exe") + // Resolve path based on cargo env vars + let target_dir = PathBuf::from(std::env::var("CARGO_TARGET_DIR").unwrap_or_else(|_| { + PathBuf::from(env!("CARGO_MANIFEST_DIR")).join("../target").to_string_lossy().to_string() + })); + target_dir.join("release/sha_p2pool.exe") } #[cfg(not(windows))] { - let crate_root = PathBuf::from(env!("CARGO_MANIFEST_DIR")); - crate_root.join("../target/release/sha_p2pool") + // Resolve path based on cargo env vars + let target_dir = PathBuf::from(std::env::var("CARGO_TARGET_DIR").unwrap_or_else(|_| { + PathBuf::from(env!("CARGO_MANIFEST_DIR")).join("../target").to_string_lossy().to_string() + })); + target_dir.join("release/sha_p2pool") } }
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
.github/workflows/integration_tests.yml
(3 hunks)integration_tests/src/p2pool_process.rs
(1 hunks)
🧰 Additional context used
🧠 Learnings (1)
integration_tests/src/p2pool_process.rs (1)
Learnt from: hansieodendaal
PR: tari-project/sha-p2pool#254
File: integration_tests/tests/cucumber.rs:25-33
Timestamp: 2025-02-26T07:13:31.464Z
Learning: Using `unwrap()` in test code is acceptable in this project, as test code doesn't need the same level of error handling as production code.
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: Cucumber tests / Base Layer
🔇 Additional comments (9)
.github/workflows/integration_tests.yml (2)
80-85
: LGTM: Good pre-building of dependencies. Pre-building the `sha_p2pool` executable before building the cucumber tests is a good practice to ensure all dependencies are available.
23-23
:❓ Verification inconclusive
Verify the nightly toolchain date.
The toolchain is set to `nightly-2024-09-05`. Ensure this pinned nightly is the date you intend to use and that it is still available for download.
🏁 Script executed:
#!/bin/bash # Check if the specified Rust toolchain is available rustup toolchain list | grep "nightly-2024-09-05" # Check the latest available nightly toolchain rustup toolchain list | grep "nightly" | sort -r | head -n 1Length of output: 304
Manual Verification Needed: Confirm the Toolchain Version
The workflow file (`.github/workflows/integration_tests.yml`, line 23) still uses `toolchain: nightly-2024-09-05`. However, the automated check couldn't verify this because the `rustup` command is absent in the current environment. Please manually verify that the specified nightly toolchain date is correct and corresponds to an available Rust toolchain release; if it is incorrect or unavailable, update it accordingly.
integration_tests/src/p2pool_process.rs (7)
1-32
: LGTM: Well-structured module organization. The module has appropriate imports, well-defined constants, and a clear type alias for the gRPC client. The code organization follows best practices for Rust modules.
33-50
: Good use of the Drop trait for resource cleanup. Implementing the Drop trait for P2PoolProcess ensures that resources are properly cleaned up when the process is no longer needed, which is a best practice for resource management.
168-200
: Great CLI arguments construction for transparency in test execution. Configuring the P2Pool node through explicit StartArgs is a good practice: it makes the test configuration transparent and makes it easier to understand what is being tested.
242-247
: Good design: Simple and focused client connection method.The
get_grpc_client
method is well-designed, focusing solely on establishing the connection to the gRPC service without unnecessary complications.
249-279
: Add a short delay in the port-free check loop.The loop checks port availability 40 times with a 250ms sleep, but it's still quite aggressive. Consider adding a short exponential backoff to reduce unnecessary CPU usage while waiting for ports to be freed.
pub fn kill(&mut self) { if let Some(child) = &mut self.running_instance { if let Err(e) = child.kill() { println!( "Failed to kill p2pool node: '{}', process id: {}, path: '{}', error: '{}'", self.name.clone(), child.id(), get_p2pool_exe_path().display(), e ); } } // Wait till the ports are cleared let mut p2p_port_closed = false; let mut grpc_port_closed = false; let mut attempts = 0; loop { if !p2p_port_closed && TcpListener::bind(("127.0.0.1", self.config.p2p_port)).is_ok() { p2p_port_closed = true; } if !grpc_port_closed && TcpListener::bind(("127.0.0.1", self.config.grpc_port)).is_ok() { grpc_port_closed = true; } attempts += 1; - std::thread::sleep(std::time::Duration::from_millis(250)); + // Use exponential backoff with a cap to reduce CPU usage + let delay = std::cmp::min(250 * (1 << attempts.min(4)), 1000); + std::thread::sleep(std::time::Duration::from_millis(delay)); if attempts >= 40 { break; } } }
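For reference, the capped backoff formula suggested in the diff above can be checked as a standalone function (the `backoff_delay_ms` name is illustrative):

```rust
/// Capped exponential backoff in milliseconds: 250ms doubled per attempt,
/// with the exponent clamped at 4 and the final delay capped at 1000ms.
fn backoff_delay_ms(attempt: u32) -> u64 {
    std::cmp::min(250 * (1u64 << attempt.min(4)), 1000)
}

fn main() {
    assert_eq!(backoff_delay_ms(0), 250);
    assert_eq!(backoff_delay_ms(1), 500);
    assert_eq!(backoff_delay_ms(2), 1000); // 250 * 4 hits the cap exactly
    assert_eq!(backoff_delay_ms(40), 1000); // exponent clamped, then capped
    println!("ok");
}
```

With 40 attempts this spends roughly 35 seconds in total instead of a flat 10 seconds, while idling less aggressively once the cap is reached.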
282-407
: LGTM: Comprehensive command-line arguments builder. The `to_args_command_line`
function thoroughly converts all StartArgs properties to their corresponding command-line arguments, which is well-structured and maintainable.
422-487
: Proper error handling throughout the verification function. The `verify_peer_connected` function uses appropriate error handling with the `?` operator rather than `unwrap()`, which aligns with the project's guidance on error handling in test code. This implementation is robust and maintainable.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
This is looking good, a few comments that I think we need to address.
And from the ticket I think we need to add dedicated cucumber tests for:
- Nodes should sync peers
- New node should load up previously synced blocks (shutdown and restart the node)
0e0226a
to
0f3bd4f
Compare
Added cucumber tests - framework - scenarios: - New node should sync with peers and propagate blocks - Different squads should stay on their respective chains
0f3bd4f
to
6ab5afb
Compare
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Actionable comments posted: 0
🧹 Nitpick comments (16)
integration_tests/.husky/_/husky.sh (1)
18-18
: Consider adding a directive for ShellCheck. Line 18 sources a non-constant file, which ShellCheck cannot analyze.
```diff
-. ~/.huskyrc
+# shellcheck disable=SC1090
+. ~/.huskyrc
```
🧰 Tools
🪛 Shellcheck (0.10.0)
[warning] 18-18: ShellCheck can't follow non-constant source. Use a directive to specify location.
(SC1090)
integration_tests/tests/steps/base_node_steps.rs (2)
19-20
: Consider removing or shortening the 1-second sleep. Since `spawn_base_node` already waits for services to come online, the extra 1-second sleep might be redundant and slow the tests. Evaluate whether it is truly needed; otherwise, removing or shortening it could reduce test runtime.
23-33
: Reduce duplication between seed and non-seed base node steps. The logic here is almost identical to the seed variant above, with the only difference being the seed node flag. Consider refactoring into a single helper function to minimize duplication.
.github/workflows/integration_tests.yml (2)
30-31
: Clarify job naming for consistency. The job is named `sha_p2pool`, but the visible label is "Cucumber tests / Base Layer." This mismatch might be confusing when scanning the workflow logs. Consider aligning them to ensure clarity.
43-43
: Fix the likely typo "cucunber.rs". Spelling mistakes in comments or file references can lead to confusion. Update the reference to "cucumber.rs" if that is correct.
integration_tests/src/base_node_process.rs (1)
230-249
: Consider converting blocking sleeps in `kill` to an async approach. While you have a 10-second cutoff (40 × 250ms), this still blocks the thread rather than yielding to other tasks. Converting to an async loop with `tokio::time::sleep` would be more idiomatic and prevent potential blocking in an async environment.
integration_tests/src/lib.rs (1)
24-39
: Consider adding a short sleep interval in the port-finding loop.
Currently, the loop attempts to bind repeatedly without any pause, which can consume unnecessary CPU cycles if many attempts are needed. A possible adjustment:
```diff
 loop {
     let port = rand::thread_rng().gen_range(min..max);
     if TcpListener::bind(("127.0.0.1", port)).is_ok() {
         return Some(port);
     }
+    std::thread::sleep(std::time::Duration::from_millis(50));
     if start.elapsed() > time_out {
         return None;
     }
 }
```
integration_tests/src/miner.rs (1)
97-117
: Loop-based nonce finding is straightforward but CPU-intensive.
The code checks up to 1,000,000 attempts without a delay. Acceptable for test code, but consider a small sleep if CPU usage becomes an issue.
integration_tests/src/p2pool_process.rs (1)
53-224
: Consider breaking down the node-spawning logic into smaller functions.
This function configures, spawns, and initializes the P2Pool node. A modular approach may improve readability and maintainability.integration_tests/tests/steps/p2pool_steps.rs (6)
15-24
: Consider removing the fixed 1-second delay or replacing it with a readiness check. Currently, the function waits for 1 second after spawning the p2pool seed node, which can introduce unnecessary test delays. A conditional or event-based wait might be more robust and eliminate potential flakiness.
26-35
: Reduce repetitive error-handling and waiting logic. This function repeats the same pattern of checking for errors, logging, panicking, and sleeping. Consider extracting this into a shared helper or macro, especially since multiple functions employ the same pattern.
37-46
: Promote code clarity for block addition. The function's logic is structurally sound, but combining error logging, panicking, and sleeping within the same block could reduce readability. Breaking the process into smaller steps or using a helper function could improve maintainability.
48-57
: Consider fine-grained checks for block height verification. Relying solely on a 1-second delay may not account for slower systems or network hiccups. Introducing a loop or retry mechanism for verifying the block height could enhance test stability.
70-79
: Preserve error context in shutdown logic. Using `panic!(msg)` discards some stack information. Consider returning a custom error or using `anyhow::Result` for better diagnostics. If a panic is desired, you might embed the original error's backtrace.
81-90
: Refine restart logic and synchronization. Restarting a node and then sleeping for 1 second works, but adding a handshake or heartbeat check ensures the node is fully operational. Consider reusing shared logic or a dedicated wait condition to confirm readiness before proceeding.
p2pool/Cargo.toml (1)
1-80
: Validate pinned dependencies and git references for long-term stability. Pinning to specific commits or older versions (e.g., "tokio = 1.41.0") ensures reproducibility but can increase maintenance overhead if upstream updates are missed. Double-check for any security advisories on pinned versions and consider referencing official release tags where possible.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (38)
- `.github/workflows/audit.yml` (1 hunks)
- `.github/workflows/ci.yml` (1 hunks)
- `.github/workflows/integration_tests.yml` (3 hunks)
- `.gitignore` (1 hunks)
- `Cargo.toml` (1 hunks)
- `integration_tests/.husky/_/husky.sh` (1 hunks)
- `integration_tests/Cargo.toml` (1 hunks)
- `integration_tests/README.md` (1 hunks)
- `integration_tests/log4rs/cucumber.yml` (1 hunks)
- `integration_tests/src/base_node_process.rs` (1 hunks)
- `integration_tests/src/lib.rs` (1 hunks)
- `integration_tests/src/miner.rs` (1 hunks)
- `integration_tests/src/p2pool_process.rs` (1 hunks)
- `integration_tests/src/world.rs` (1 hunks)
- `integration_tests/tests/cucumber.rs` (1 hunks)
- `integration_tests/tests/features/Sync.feature` (1 hunks)
- `integration_tests/tests/steps/base_node_steps.rs` (1 hunks)
- `integration_tests/tests/steps/mod.rs` (1 hunks)
- `integration_tests/tests/steps/p2pool_steps.rs` (1 hunks)
- `log4rs_detailed.yml` (1 hunks)
- `p2pool/Cargo.toml` (1 hunks)
- `p2pool/src/cli/args.rs` (4 hunks)
- `p2pool/src/cli/commands/mod.rs` (1 hunks)
- `p2pool/src/cli/commands/util.rs` (6 hunks)
- `p2pool/src/cli/mod.rs` (1 hunks)
- `p2pool/src/lib.rs` (1 hunks)
- `p2pool/src/main.rs` (1 hunks)
- `p2pool/src/server/config.rs` (4 hunks)
- `p2pool/src/server/grpc/p2pool.rs` (7 hunks)
- `p2pool/src/server/http/stats_collector.rs` (1 hunks)
- `p2pool/src/server/p2p/network.rs` (30 hunks)
- `p2pool/src/server/p2p/peer_store.rs` (3 hunks)
- `p2pool/src/server/server.rs` (1 hunks)
- `p2pool/src/sharechain/in_memory.rs` (4 hunks)
- `p2pool/src/sharechain/lmdb_block_storage.rs` (1 hunks)
- `p2pool/src/sharechain/p2block.rs` (1 hunks)
- `p2pool/src/sharechain/p2chain_level.rs` (1 hunks)
- `src/cli/mod.rs` (0 hunks)
💤 Files with no reviewable changes (1)
- src/cli/mod.rs
✅ Files skipped from review due to trivial changes (1)
- .gitignore
🚧 Files skipped from review as they are similar to previous changes (23)
- p2pool/src/sharechain/p2chain_level.rs
- p2pool/src/server/http/stats_collector.rs
- .github/workflows/ci.yml
- p2pool/src/server/grpc/p2pool.rs
- .github/workflows/audit.yml
- p2pool/src/cli/mod.rs
- p2pool/src/cli/commands/mod.rs
- integration_tests/tests/cucumber.rs
- p2pool/src/sharechain/lmdb_block_storage.rs
- p2pool/src/server/p2p/peer_store.rs
- p2pool/src/main.rs
- log4rs_detailed.yml
- p2pool/src/server/server.rs
- integration_tests/Cargo.toml
- p2pool/src/cli/args.rs
- p2pool/src/sharechain/in_memory.rs
- p2pool/src/server/p2p/network.rs
- integration_tests/tests/steps/mod.rs
- p2pool/src/lib.rs
- integration_tests/src/world.rs
- p2pool/src/server/config.rs
- integration_tests/log4rs/cucumber.yml
- p2pool/src/sharechain/p2block.rs
🧰 Additional context used
🧠 Learnings (1)
integration_tests/src/p2pool_process.rs (1)
Learnt from: hansieodendaal
PR: tari-project/sha-p2pool#254
File: integration_tests/tests/cucumber.rs:25-33
Timestamp: 2025-02-26T07:13:31.464Z
Learning: Using `unwrap()` in test code is acceptable in this project, as test code doesn't need the same level of error handling as production code.
🪛 Shellcheck (0.10.0)
integration_tests/.husky/_/husky.sh
[warning] 18-18: ShellCheck can't follow non-constant source. Use a directive to specify location.
(SC1090)
🪛 LanguageTool
integration_tests/README.md
[style] ~30-~30: Three successive sentences begin with the same word. Consider rewording the sentence or use a thesaurus to find a synonym.
Context: ...est cucumber -- -t "@critical" ``` - To run a specific file or files add `-- --...
(ENGLISH_WORD_REPEAT_BEGINNING_RULE)
🔇 Additional comments (28)
p2pool/src/cli/commands/util.rs (9)
4-16
: Comprehensive import changes look good. The imports have been expanded to include necessary modules for file operations, path handling, and timing functionalities, which support the new LibP2pInfo structure and its methods.
18-20
: Enhanced logging capability. The addition of `warn` alongside `info` from the log crate allows for more nuanced error reporting, which is especially beneficial for the new file operation functionalities.
55-59
: Good environment-based logging configuration. The conditional selection of logging configuration based on the
PERFORM_DETAIL_LOGGING
environment variable provides flexibility for development and debugging scenarios.
65-65
: CLI logging enhances debugging capabilities. Logging the CLI configuration at startup is valuable for diagnostics and troubleshooting.
89-100
: Well-implemented optional configurations. The conditional setting of network silence delay and minimum target difficulties (for both SHA3 and RandomX) from command-line arguments provides good flexibility for different operational scenarios.
181-181
: Fixed squad calculation with max function. The use of
max(1, config.p2p_service.num_squads)
ensures that the squad ID calculation uses at least 1 for the number of squads, preventing potential division by zero or other incorrect calculations.
185-193
: Good implementation of LibP2pInfo export with error handling. The code properly exports the LibP2pInfo to a file when requested and includes appropriate error handling to log warnings if the operation fails, rather than crashing the application.
224-259
: Well-designed LibP2pInfo structure with robust file operations. The LibP2pInfo structure is well-implemented with:
- Proper serialization/deserialization support
- A save_to_file method that creates parent directories if needed
- A read_from_file method with timeout mechanism to avoid indefinite waiting
- Good error handling for all file operations
This implementation correctly fixes the issue with the non-existent `fs::exists` mentioned in past reviews by using `path.exists()` instead.
233-234
: Correct directory existence check. The code now properly checks directory existence with `parent.exists()` instead of the non-existent `fs::exists` function, and creates the directory if needed.
integration_tests/README.md (2)
1-52
: Well-structured documentation with clear examples. Approved! The README provides comprehensive instructions for running cucumber tests with various options. The examples use actual test names and tag expressions that match the Sync.feature file, making it easy for developers to understand how to run specific tests.
🧰 Tools
🪛 LanguageTool
[style] ~30-~30: Three successive sentences begin with the same word. Consider rewording the sentence or use a thesaurus to find a synonym.
Context: ...est cucumber -- -t "@critical" ``` - To run a specific file or files add `-- --...(ENGLISH_WORD_REPEAT_BEGINNING_RULE)
30-41
: Consider varying sentence beginnings for better readability. The section starting from line 30 has three consecutive sentences beginning with "To run". Consider rephrasing to improve the flow of the documentation.
```diff
-To run a specific file or files add `-- --input glob` to the command line
+For executing a specific file or files, add `-- --input glob` to the command line
```
🧰 Tools
🪛 LanguageTool
[style] ~30-~30: Three successive sentences begin with the same word. Consider rewording the sentence or use a thesaurus to find a synonym.
Context: ...est cucumber -- -t "@critical" ``` - To run a specific file or files add `-- --...(ENGLISH_WORD_REPEAT_BEGINNING_RULE)
integration_tests/.husky/_/husky.sh (1)
1-32
: Properly structured hook script with good error handling. Approved! The script correctly implements the husky hook pattern with appropriate debug logging, configuration sourcing, and error handling.
🧰 Tools
🪛 Shellcheck (0.10.0)
[warning] 18-18: ShellCheck can't follow non-constant source. Use a directive to specify location.
(SC1090)
Cargo.toml (1)
1-14
: Well-structured workspace configuration. Approved! The file correctly sets up a workspace containing p2pool and integration_tests packages, with appropriate resolver and release profile settings. The comment explaining the overflow-checks setting is particularly helpful.
integration_tests/tests/features/Sync.feature (2)
1-96
: Well-structured Cucumber scenarios for testing p2pool node synchronization. Approved! The four critical scenarios thoroughly test different aspects of node synchronization:
- Basic synchronization between nodes
- Resynchronization after going offline
- Block propagation across nodes
- Proper isolation between different squads
Each scenario follows a clear Given-When-Then structure with appropriate assertions.
18-18
:❓ Verification inconclusive
Consider increasing wait time before stopping.
A one-second wait might be too short to ensure all operations have completed properly, especially in CI environments where resources may be constrained.
🏁 Script executed:
```bash
#!/bin/bash
# Check if other cucumber tests use longer wait times
grep -r "I wait" integration_tests/tests/features/ | grep -v "1 seconds and stop"
```
Length of output: 83
Action Required: Reassess the Wait Time in CI Environments
It appears that across the integration tests (including
integration_tests/tests/features/Sync.feature
at line 18) a wait time of 1 second is consistently used. Since our grep search did not reveal any tests with a longer wait time, please consider:
- Evaluating whether a 1-second pause is sufficiently robust under typical CI resource constraints.
- Testing if increasing the wait (e.g., to 2–3 seconds) improves stability and reduces flaky test behavior.
Please manually verify the impact on CI performance before making any changes.
integration_tests/tests/steps/base_node_steps.rs (1)
35-45
: Looks good as an extended step.
.github/workflows/integration_tests.yml (1)
80-85
: Verify usage of the `${{ env.TARGET_BINS }}` variable. The script references `--release ${{ env.TARGET_BINS }}`, but `TARGET_BINS` is never explicitly set or documented in the workflow. Ensure this environment variable is properly assigned where needed or remove it if it is no longer used.
208-210
: Thank you for replacing the panic with a friendlier error message. Replacing immediate panics with graceful error handling helps preserve test context for diagnosing failures.
integration_tests/src/lib.rs (2)
46-67
: Panic usage confirmed acceptable in test context.
This utility function panics for a non-responsive service. In production code, an error return might be more robust, but test code can legitimately use panics to fail fast.
69-91
: Queries for peer addresses are concise and safe.
These helper functions properly avoid unwrapping the lookups and skip missing peers, which is preferable to panicking in integration tests.
integration_tests/src/miner.rs (3)
28-83
: Verify usage of retrieved difficulty from the node. Line 64 uses `Difficulty::min()` instead of the `target_difficulty` fetched at line 58. Ensure this is intentional; otherwise, pass in the actual difficulty from the node to validate correct mining difficulty.

```diff
-find_sha3x_header_with_achieved_difficulty(&mut block.header, Difficulty::min())?;
+find_sha3x_header_with_achieved_difficulty(&mut block.header, target_difficulty)?;
```
85-95
: Key-generation approach for dual Tari addresses appears suitable for test scenarios.
Since these addresses are ephemeral and used in integration tests, the provided random approach is reasonable and concise.
119-187
: Height verification logic is understandable and consistent.
The function loops up to 30 seconds, then provides a clear error if height is not achieved. This is a solid pattern for integration testing.
integration_tests/src/p2pool_process.rs (4)
31-45
: Struct design is clear and includes essential node details.
Storing theChild
inrunning_instance
with a customDrop
to kill the process ensures automated cleanup.
226-232
: Simple gRPC client retrieval looks good.
It connects directly and returns a client wrapped in a result, which is sufficient for test usage.
233-263
: Graceful port shutdown is well-handled.
The loop re-checks port availability and includes a wait to avoid high CPU usage. This ensures the process is completely down before reuse in tests.
434-501
: Connection verification logic is valid.
The function checks for the peer in a JSON response, retrying for up to 6 minutes. This extended wait is acceptable for correctness in complex integration test environments.integration_tests/tests/steps/p2pool_steps.rs (1)
59-68
: Alert: potential test flakiness in peer connection checks. Similar to block height verification, waiting 1 second might be insufficient if the peer connection process sometimes takes longer. A short poll-based approach could produce more reliable test outcomes.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Actionable comments posted: 0
🧹 Nitpick comments (18)
integration_tests/README.md (3)
5-5
: Add a comma after "form" for better readability: "In its simplest form, you can run the tests from the project route with..."
```diff
-In its simplest form you can run the tests from the project route with `cargo test --release --test cucumber`
+In its simplest form, you can run the tests from the project route with `cargo test --release --test cucumber`
```
🧰 Tools
🪛 LanguageTool
[uncategorized] ~5-~5: Possible missing comma found.
Context: ...s ## Procedure to run In its simplest form you can run the tests from the project ...(AI_HYDRA_LEO_MISSING_COMMA)
30-30
: Add a comma after "files" for consistent punctuation. The missing comma makes the sentence harder to parse.
```diff
-To run a specific file or files add `-- --input glob` to the command line
+To run a specific file or files, add `-- --input glob` to the command line
```
🧰 Tools
🪛 LanguageTool
[style] ~30-~30: Three successive sentences begin with the same word. Consider rewording the sentence or use a thesaurus to find a synonym.
Context: ...est cucumber -- -t "@critical" ``` - To run a specific file or files add `-- --...(ENGLISH_WORD_REPEAT_BEGINNING_RULE)
[uncategorized] ~30-~30: Possible missing comma found.
Context: ...cal" ``` - To run a specific file or files add-- --input glob
to the command li...(AI_HYDRA_LEO_MISSING_COMMA)
45-47
: Add a period at the end of the paragraph for consistency. All other paragraphs in the document end with a period.
```diff
-on a "when we have time" basis
+on a "when we have time" basis.
```
integration_tests/.husky/_/husky.sh (2)
18-18
: Consider explicitly handling sourcing errors. The script sources `~/.huskyrc` but doesn't handle errors if the sourcing fails. Consider adding error handling or making the sourcing silent.

```diff
-. ~/.huskyrc
+. ~/.huskyrc || echo "husky - Warning: Failed to source ~/.huskyrc"
```
🧰 Tools
🪛 Shellcheck (0.10.0)
[warning] 18-18: ShellCheck can't follow non-constant source. Use a directive to specify location.
(SC1090)
26-29
: Use double quotes around $exitCode to handle potential spaces. While unlikely in this context, it's a good practice to quote variables in shell scripts.
```diff
-if [ $exitCode != 0 ]; then
+if [ "$exitCode" != 0 ]; then
   echo "husky - $hook_name hook exited with code $exitCode (error)"
-  exit $exitCode
+  exit "$exitCode"
 fi
```
Cargo.toml (1)
8-9
: Clarify timeline for edition update. The comment mentions "until we move to edition=2021" but doesn't provide a timeline or task reference. Consider adding a TODO with a tracking issue or specific milestone.
```diff
-# Add here until we move to edition=2021
+# Use resolver="2" until we move to edition=2021 (TODO: Add tracking issue)
```
.github/workflows/integration_tests.yml (1)
23-23
: Consider pinning toolchain version in a central location. The toolchain version is hardcoded here. Consider defining it in a variable at the repository level for easier maintenance.
integration_tests/src/lib.rs (1)
48-67
: Consider parameterizing the maximum number of attempts. The function has a hardcoded maximum of 240 attempts (4*60). Consider making this configurable for flexibility in different test environments.
integration_tests/src/miner.rs (1)
119-187
: Consider parameterizing the timeout duration. The function uses a hardcoded 30-second timeout. For increased flexibility in different test environments, consider making this timeout configurable.
integration_tests/src/world.rs (3)
46-56
: Use a descriptive name in the Debug impl.
TheDebug
implementation usesf.debug_struct("Client")
instead of “TariWorld,” which can cause confusion when reading debug logs.You can rename
"Client"
to"TariWorld"
to match the struct:

```diff
-f.debug_struct("Client")
+f.debug_struct("TariWorld")
```
101-107
: Consider sanitizing more characters in scenario-based directories.
Spaces are replaced with underscores, but other filesystem-sensitive characters (like:
or\
) might still appear.If scenarios or features contain unusual characters, consider removing or encoding them to avoid filepath issues.
109-119
: Graceful shutdown confirmation.
The loop simply callskill
on each node but doesn't confirm whether the processes have fully shut down before proceeding. This might occasionally leave orphaned processes if they take longer to terminate.Consider waiting for the node processes to exit or verifying shutdown success before continuing each scenario's teardown.
integration_tests/src/p2pool_process.rs (3)
188-199
: Mixed logging approach.
This block prints to stdout withprintln!
but also uses thelog
crate for debug logs. While this is test code, consider making it consistent to standardize logs (e.g., using onlydebug!(...)
orinfo!(...)
```diff
-println!("{}", msg);
+info!(target: LOG_TARGET, "{}", msg);
```
248-259
: Blocking sleeps in async context.
Usingstd::thread::sleep
within an async codebase can block the entire thread.You could replace it with
tokio::time::sleep
to avoid blocking issues:-use std::thread::sleep; +use tokio::time::{sleep, Duration as TokioDuration}; ... -std::thread::sleep(std::time::Duration::from_millis(250)); +sleep(TokioDuration::from_millis(250)).await;
434-501
: Long polling duration.
The function polls for up to 360 seconds to check peer connectivity, which might be excessive in certain test configurations.Consider making the timeout configurable or shorter by default, especially if your system typically converges faster.
integration_tests/tests/steps/p2pool_steps.rs (2)
18-24
: Refactor repetitive error handling.
Several steps follow the pattern:
- Call a function returning
Result
.- If it fails, log and panic.
- Sleep for a second.
Consider extracting a helper function that takes a closure, logs errors, and panics, reducing duplication and improving readability.
59-68
: Unify connectivity checks or remove the 1-second sleep.
You do a 1-second delay after verifying peer connectivity, but each scenario step uses the same approach.If repeated sleeps are not strictly required, removing them or customizing them might save test execution time.
p2pool/Cargo.toml (1)
9-69
: Dependencies Configuration and Version Pinning
The dependencies are comprehensively listed with explicit versions and, where applicable, feature flags. This pinned configuration promotes reproducible builds. One point to consider: the dependency on `hickory-resolver` uses a wildcard (`version = "*"`), which can lead to unexpected breaking changes if a new version is released. It is recommended to pin an explicit version for better stability.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (38)
- `.github/workflows/audit.yml` (1 hunks)
- `.github/workflows/ci.yml` (1 hunks)
- `.github/workflows/integration_tests.yml` (3 hunks)
- `.gitignore` (1 hunks)
- `Cargo.toml` (1 hunks)
- `integration_tests/.husky/_/husky.sh` (1 hunks)
- `integration_tests/Cargo.toml` (1 hunks)
- `integration_tests/README.md` (1 hunks)
- `integration_tests/log4rs/cucumber.yml` (1 hunks)
- `integration_tests/src/base_node_process.rs` (1 hunks)
- `integration_tests/src/lib.rs` (1 hunks)
- `integration_tests/src/miner.rs` (1 hunks)
- `integration_tests/src/p2pool_process.rs` (1 hunks)
- `integration_tests/src/world.rs` (1 hunks)
- `integration_tests/tests/cucumber.rs` (1 hunks)
- `integration_tests/tests/features/Sync.feature` (1 hunks)
- `integration_tests/tests/steps/base_node_steps.rs` (1 hunks)
- `integration_tests/tests/steps/mod.rs` (1 hunks)
- `integration_tests/tests/steps/p2pool_steps.rs` (1 hunks)
- `log4rs_detailed.yml` (1 hunks)
- `p2pool/Cargo.toml` (1 hunks)
- `p2pool/src/cli/args.rs` (4 hunks)
- `p2pool/src/cli/commands/mod.rs` (1 hunks)
- `p2pool/src/cli/commands/util.rs` (6 hunks)
- `p2pool/src/cli/mod.rs` (1 hunks)
- `p2pool/src/lib.rs` (1 hunks)
- `p2pool/src/main.rs` (1 hunks)
- `p2pool/src/server/config.rs` (4 hunks)
- `p2pool/src/server/grpc/p2pool.rs` (7 hunks)
- `p2pool/src/server/http/stats_collector.rs` (1 hunks)
- `p2pool/src/server/p2p/network.rs` (30 hunks)
- `p2pool/src/server/p2p/peer_store.rs` (3 hunks)
- `p2pool/src/server/server.rs` (1 hunks)
- `p2pool/src/sharechain/in_memory.rs` (4 hunks)
- `p2pool/src/sharechain/lmdb_block_storage.rs` (1 hunks)
- `p2pool/src/sharechain/p2block.rs` (1 hunks)
- `p2pool/src/sharechain/p2chain_level.rs` (1 hunks)
- `src/cli/mod.rs` (0 hunks)
💤 Files with no reviewable changes (1)
- src/cli/mod.rs
🚧 Files skipped from review as they are similar to previous changes (25)
- .gitignore
- p2pool/src/server/server.rs
- p2pool/src/server/grpc/p2pool.rs
- p2pool/src/server/http/stats_collector.rs
- .github/workflows/audit.yml
- integration_tests/tests/cucumber.rs
- p2pool/src/sharechain/lmdb_block_storage.rs
- .github/workflows/ci.yml
- p2pool/src/sharechain/p2chain_level.rs
- p2pool/src/server/p2p/peer_store.rs
- p2pool/src/main.rs
- log4rs_detailed.yml
- integration_tests/tests/steps/mod.rs
- p2pool/src/cli/mod.rs
- p2pool/src/cli/commands/mod.rs
- p2pool/src/server/config.rs
- p2pool/src/sharechain/p2block.rs
- p2pool/src/server/p2p/network.rs
- p2pool/src/sharechain/in_memory.rs
- p2pool/src/cli/args.rs
- p2pool/src/cli/commands/util.rs
- p2pool/src/lib.rs
- integration_tests/src/base_node_process.rs
- integration_tests/Cargo.toml
- integration_tests/log4rs/cucumber.yml
🧰 Additional context used
🧠 Learnings (1)
integration_tests/src/p2pool_process.rs (1)
Learnt from: hansieodendaal
PR: tari-project/sha-p2pool#254
File: integration_tests/tests/cucumber.rs:25-33
Timestamp: 2025-02-26T07:13:31.464Z
Learning: Using `unwrap()` in test code is acceptable in this project, as test code doesn't need the same level of error handling as production code.
🪛 Shellcheck (0.10.0)
integration_tests/.husky/_/husky.sh
[warning] 18-18: ShellCheck can't follow non-constant source. Use a directive to specify location.
(SC1090)
🪛 LanguageTool
integration_tests/README.md
[uncategorized] ~5-~5: Possible missing comma found.
Context: ...s ## Procedure to run In its simplest form you can run the tests from the project ...
(AI_HYDRA_LEO_MISSING_COMMA)
[style] ~30-~30: Three successive sentences begin with the same word. Consider rewording the sentence or use a thesaurus to find a synonym.
Context: ...est cucumber -- -t "@critical" ``` - To run a specific file or files add `-- --...
(ENGLISH_WORD_REPEAT_BEGINNING_RULE)
[uncategorized] ~30-~30: Possible missing comma found.
Context: ...cal" ``` - To run a specific file or files add -- --input glob
to the command li...
(AI_HYDRA_LEO_MISSING_COMMA)
⏰ Context from checks skipped due to timeout of 90000ms (2)
- GitHub Check: test (esmeralda)
- GitHub Check: Cucumber tests / Base Layer
🔇 Additional comments (28)
integration_tests/README.md (1)
1-52
: Well-structured documentation with clear examples! The README provides comprehensive instructions for running cucumber tests with various options and includes real examples that match actual test names. The CI section clearly communicates when different test categories will run.
🧰 Tools
🪛 LanguageTool
[uncategorized] ~5-~5: Possible missing comma found.
Context: ...s ## Procedure to run In its simplest form you can run the tests from the project ...(AI_HYDRA_LEO_MISSING_COMMA)
[style] ~30-~30: Three successive sentences begin with the same word. Consider rewording the sentence or use a thesaurus to find a synonym.
Context: ...est cucumber -- -t "@critical" ``` - To run a specific file or files add `-- --...(ENGLISH_WORD_REPEAT_BEGINNING_RULE)
[uncategorized] ~30-~30: Possible missing comma found.
Context: ...cal" ``` - To run a specific file or files add-- --input glob
to the command li...(AI_HYDRA_LEO_MISSING_COMMA)
integration_tests/.husky/_/husky.sh (1)
1-32
: Shell script correctly handles Husky hook initialization and execution. The script properly separates readonly and export operations, correctly manages the execution flow, and handles error codes appropriately.
🧰 Tools
🪛 Shellcheck (0.10.0)
[warning] 18-18: ShellCheck can't follow non-constant source. Use a directive to specify location.
(SC1090)
Cargo.toml (2)
1-6
: Appropriate workspace configuration. Converting from a package to a workspace with "p2pool" and "integration_tests" members is a good approach for organizing the integration tests.
11-14
: Good practice enabling overflow checks in release builds. Enabling overflow checks in release builds is a good safety measure that prevents silent integer overflows from causing incorrect behavior.
integration_tests/tests/features/Sync.feature (5)
8-18
: First critical scenario covers basic node sync functionality. This scenario properly verifies that a new p2pool node can connect to an existing network and sync the blockchain state correctly.
20-40
: Second critical scenario validates offline node resync capability. The scenario effectively tests that a node can rejoin the network after being offline and successfully sync the blocks it missed.
42-62
: Third critical scenario confirms bidirectional block propagation. This scenario verifies that not only can new nodes receive blocks, but they can also mine and propagate blocks to existing nodes in the network.
64-95
: Last critical scenario effectively tests squad isolation. Good test that verifies that different squads maintain separate chains and don't interfere with each other's mining operations, which is critical for the p2pool architecture.
The use of "DOLPHINS" and "TURTLES" as neutral squad names is preferable to value-laden terms like "WINNERS" and "LOSERS" that were mentioned in a previous review.
18-18
:❓ Verification inconclusive
Consider extending wait times for more reliable test execution.
All scenarios wait only 1 second before stopping. This might not be enough time for all network operations to complete, especially in CI environments with varying performance characteristics.
Also applies to: 40-40, 62-62, 95-95
🏁 Script executed:
#!/bin/bash
# Check if there are any network-related timeouts in logs from previous test runs
rg -i "timeout|timed out|connection refused" --glob "*.log" || echo "No timeout issues found in logs"

Length of output: 283
Extend wait times for more robust CI behavior
While the log search didn’t reveal any network timeout messages, note that the current static 1-second wait (at lines 18, 40, 62, and 95 in
integration_tests/tests/features/Sync.feature
) might still be too short under variable CI conditions. Even though no issues were logged, the warning from ripgrep ("No files were searched") also suggests that the log check may not be covering all cases. It would be prudent to either increase the wait duration or consider parameterizing this timeout to accommodate environments with slower network responses.
integration_tests/tests/steps/base_node_steps.rs (4)
19-19
: Fix incorrect debug message. The debug message incorrectly states "start_seed_base_node" instead of "start_base_node".
- debug!(target: LOG_TARGET, "start_seed_base_node: spawned '{}'", name);
+ debug!(target: LOG_TARGET, "start_base_node: spawned '{}'", name);
11-21
: LGTM: Well-structured base node seed implementation. The function correctly spawns a seed base node, handles errors appropriately, and includes a delay for stabilization. The error handling with formatted messages is particularly good for debugging.
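The formatted-error pattern praised here can be sketched as follows (the `spawn_node` helper and binary name are hypothetical, not the repository's actual API): `map_err` attaches the node name and cause before propagating, so a failed spawn in a multi-node scenario is immediately attributable.

```rust
use std::process::Command;

// Hypothetical sketch of attaching context to spawn errors; the helper and
// binary names are illustrative only.
fn spawn_node(name: &str, binary: &str) -> Result<std::process::Child, String> {
    Command::new(binary)
        .spawn()
        .map_err(|e| format!("failed to spawn base node '{name}': {e}"))
}

fn main() {
    // A binary that does not exist exercises the formatted error path.
    let err = spawn_node("seed_node_A", "definitely-not-a-real-binary").unwrap_err();
    assert!(err.contains("seed_node_A"));
    println!("ok");
}
```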
23-33
: LGTM: Clean implementation of base node spawn. The function properly spawns a non-seed base node and includes appropriate error handling and logging.
35-45
: LGTM: Good implementation for connecting to all seed nodes. The function properly retrieves all seed nodes from the world and connects the new base node to them. Error handling is consistent with the other functions.
.github/workflows/integration_tests.yml (3)
41-48
: LGTM: Simplified environment setup. The simplified environment setup with explicit profile settings and the flag to prevent rebuilding improves clarity.
90-98
: LGTM: Focused test execution with good retry logic. The integration test run is well-configured with specific tag filtering, concurrency control, and retry capability, which is important for potentially flaky integration tests.
80-85
:✅ Verification successful
Verify build dependencies are correctly handled.
The workflow builds sha_p2pool before running tests, but ensure all required dependencies are built.
🏁 Script executed:
#!/bin/bash
# Check if the cargo file references sha_p2pool as a dependency for tests
grep -r "sha_p2pool.*= " integration_tests/ --include="*.rs" --include="Cargo.toml"

Length of output: 148
Dependency Build Verification Passed
- The `sha_p2pool` dependency is referenced correctly in `integration_tests/Cargo.toml` as a local dependency (`{ path = "../p2pool" }`).
- The workflow step builds `sha_p2pool` before running tests, ensuring that the dependency is available for the test execution.

No additional changes are needed.
integration_tests/src/lib.rs (4)
24-39
: LGTM: Improved port selection with timeout. The port selection function now includes a timeout parameter, which addresses the previous review comment about potential infinite loops.
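A minimal sketch of the timeout-bounded pattern, using only the standard library (the port range and the helper's signature are illustrative, not the crate's actual API):

```rust
use std::net::TcpListener;
use std::time::{Duration, Instant};

// Probe candidate ports until one binds or the deadline passes; returning
// Option instead of looping forever is what prevents the hang flagged in
// the earlier review.
fn get_port(timeout: Duration) -> Option<u16> {
    let deadline = Instant::now() + timeout;
    while Instant::now() < deadline {
        for port in 18000..19000u16 {
            if TcpListener::bind(("127.0.0.1", port)).is_ok() {
                return Some(port);
            }
        }
    }
    None
}

fn main() {
    assert!(get_port(Duration::from_secs(5)).is_some());
    println!("ok");
}
```

An alternative that sidesteps range collisions entirely is binding to port 0 and letting the OS assign an ephemeral port, at the cost of a small race between dropping the probe listener and the process actually reusing that port.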
41-44
: LGTM: Clean base directory implementation. The function generates a unique directory path that includes the process ID to avoid collisions between test runs.
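The collision-avoidance idea can be sketched like this (the `p2pool_tests` path segment is an assumed name, not the repository's actual layout):

```rust
use std::path::PathBuf;

// Embedding the process ID keeps parallel test runs from writing into the
// same directory.
fn get_base_dir() -> PathBuf {
    std::env::temp_dir()
        .join("p2pool_tests")
        .join(format!("run_{}", std::process::id()))
}

fn main() {
    let dir = get_base_dir();
    assert!(dir.to_string_lossy().contains(&std::process::id().to_string()));
    println!("ok");
}
```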
69-79
: LGTM: Fixed unwrap error handling. The function now properly uses filter_map instead of unwrap, addressing the previous review comment.
81-91
: LGTM: Fixed unwrap error handling. The function now properly uses filter_map instead of unwrap, addressing the previous review comment.
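For readers unfamiliar with the pattern, `filter_map` drops entries that fail to parse instead of panicking the way a bare `unwrap` would (a generic sketch, not the repository's data):

```rust
fn main() {
    let raw = ["8080", "not-a-port", "9090"];

    // `parse().ok()` turns failures into None, which filter_map discards,
    // so one malformed entry no longer aborts the whole run.
    let ports: Vec<u16> = raw.iter().filter_map(|s| s.parse().ok()).collect();

    assert_eq!(ports, vec![8080, 9090]);
    println!("ok");
}
```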
integration_tests/src/miner.rs (3)
28-83
: LGTM: Well-structured mining implementation with proper error handling. The function now properly handles errors using Result types and includes appropriate timeouts for network operations. The previous issues with unwrap calls have been fixed.
97-117
: LGTM: Improved error handling in difficulty calculation. The function now returns proper errors instead of panicking and includes a reasonable limit on the number of tries.
133-179
: LGTM: Improved height verification with proper error handling. The block height verification now includes proper error handling and timeout logic, addressing previous review comments.
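The poll-until-height shape described here can be sketched as follows (a hedged, stdlib-only sketch; `fetch_height` is a stand-in for the real node query, not the crate's API): retry a fallible check until it succeeds or a deadline passes, returning an error instead of panicking.

```rust
use std::time::{Duration, Instant};

// Poll `fetch_height` until it reaches `target` or `timeout` elapses.
fn wait_for_height(
    target: u64,
    timeout: Duration,
    mut fetch_height: impl FnMut() -> Result<u64, String>,
) -> Result<(), String> {
    let deadline = Instant::now() + timeout;
    loop {
        match fetch_height() {
            // Target reached: done.
            Ok(h) if h >= target => return Ok(()),
            // Not there yet (or a transient error) and time remains: retry.
            Ok(_) | Err(_) if Instant::now() < deadline => {
                std::thread::sleep(Duration::from_millis(100));
            }
            // Deadline passed: surface a descriptive error, never panic.
            Ok(h) => return Err(format!("timed out at height {h}, wanted {target}")),
            Err(e) => return Err(e),
        }
    }
}

fn main() {
    // Simulated node that advances one block per query.
    let mut h = 0u64;
    let res = wait_for_height(3, Duration::from_secs(2), || {
        h += 1;
        Ok(h)
    });
    assert!(res.is_ok());
    println!("ok");
}
```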
integration_tests/src/p2pool_process.rs (1)
79-80
: Potential port collision under concurrent spawns.
Using a fixed port range and a quick availability check can cause collisions in highly parallel environments. If concurrency is a concern, you may want to adopt ephemeral port assignment, or incorporate random backoff and retry logic to reduce the chance of collisions. Would you like me to generate a script to search for other usage of `get_port` to confirm concurrency patterns?
p2pool/Cargo.toml (4)
1-7
: Package Metadata Validation
The package metadata is well defined with all essential fields (name, authors, repository, license, version, and edition) clearly stated. This ensures that downstream users and tools can correctly identify and build the package.
71-73
: Cargo Machete Metadata Check
The `[package.metadata.cargo-machete]` section correctly instructs the tool to ignore `log4rs`. This intentional exclusion appears to be part of your dependency management strategy.
74-77
: Library Definition Verification
The library is properly declared with the correct name and source path, adhering to Rust conventions. This setup facilitates clean modularization and reuse of code within the package.
78-80
: Binary Target Configuration
The binary target is correctly configured by specifying both its name and the corresponding source file. This clear separation between library and binary targets enhances maintainability.
Description
Added cucumber tests
Closes #261
Motivation and Context
See #261
How Has This Been Tested?
New cucumber scenarios pass:
cargo test --release --test cucumber -- --name "New node should sync with peers and propagate blocks"
cargo test --release --test cucumber -- --name "Different squads should stay on their respective chains"
Existing unit tests pass
What process can a PR reviewer use to test or verify this change?
Code review
Run cucumber
Breaking Changes
Summary by CodeRabbit
New Features
Refactor / Chore
Tests & CI/CD