
v2.2: rolls out chained Merkle shreds to 100% of mainnet slots (backport of #5088) #5098

Open · wants to merge 1 commit into base: v2.2
45 changes: 6 additions & 39 deletions turbine/src/broadcast_stage/standard_broadcast_run.rs
@@ -11,9 +11,7 @@ use {
        blockstore,
        shred::{shred_code, ProcessShredsStats, ReedSolomonCache, Shred, Shredder},
    },
-   solana_sdk::{
-       genesis_config::ClusterType, hash::Hash, signature::Keypair, timing::AtomicInterval,
-   },
+   solana_sdk::{hash::Hash, signature::Keypair, timing::AtomicInterval},
    std::{sync::RwLock, time::Duration},
    tokio::sync::mpsc::Sender as AsyncSender,
};
@@ -76,7 +74,6 @@ impl StandardBroadcastRun {
        &mut self,
        keypair: &Keypair,
        max_ticks_in_slot: u8,
-       cluster_type: ClusterType,
        stats: &mut ProcessShredsStats,
    ) -> Vec<Shred> {
        if self.completed {
@@ -91,8 +88,7 @@ impl StandardBroadcastRun {
            keypair,
            &[], // entries
            true, // is_last_in_slot,
-           should_chain_merkle_shreds(self.slot, cluster_type)
-               .then_some(self.chained_merkle_root),
+           Some(self.chained_merkle_root),
            self.next_shred_index,
            self.next_code_index,
            true, // merkle_variant
@@ -117,7 +113,6 @@ impl StandardBroadcastRun {
        entries: &[Entry],
        reference_tick: u8,
        is_slot_end: bool,
-       cluster_type: ClusterType,
        process_stats: &mut ProcessShredsStats,
        max_data_shreds_per_slot: u32,
        max_code_shreds_per_slot: u32,
@@ -135,8 +130,7 @@ impl StandardBroadcastRun {
            keypair,
            entries,
            is_slot_end,
-           should_chain_merkle_shreds(self.slot, cluster_type)
-               .then_some(self.chained_merkle_root),
+           Some(self.chained_merkle_root),
            self.next_shred_index,
            self.next_code_index,
            true, // merkle_variant
@@ -202,17 +196,12 @@ impl StandardBroadcastRun {
        let mut process_stats = ProcessShredsStats::default();

        let mut to_shreds_time = Measure::start("broadcast_to_shreds");
-       let cluster_type = bank.cluster_type();

        if self.slot != bank.slot() {
            // Finish previous slot if it was interrupted.
            if !self.completed {
-               let shreds = self.finish_prev_slot(
-                   keypair,
-                   bank.ticks_per_slot() as u8,
-                   cluster_type,
-                   &mut process_stats,
-               );
+               let shreds =
+                   self.finish_prev_slot(keypair, bank.ticks_per_slot() as u8, &mut process_stats);
                debug_assert!(shreds.iter().all(|shred| shred.slot() == self.slot));
                // Broadcast shreds for the interrupted slot.
                let batch_info = Some(BroadcastShredBatchInfo {
@@ -279,7 +268,6 @@ impl StandardBroadcastRun {
            &receive_results.entries,
            reference_tick as u8,
            is_last_in_slot,
-           cluster_type,
            &mut process_stats,
            blockstore::MAX_DATA_SHREDS_PER_SLOT as u32,
            shred_code::MAX_CODE_SHREDS_PER_SLOT as u32,
@@ -507,16 +495,6 @@ impl BroadcastRun for StandardBroadcastRun {
    }
}

-fn should_chain_merkle_shreds(slot: Slot, cluster_type: ClusterType) -> bool {
-    match cluster_type {
-        ClusterType::Development => true,
-        ClusterType::Devnet => true,
-        // Roll out chained Merkle shreds to ~53% of mainnet slots.
-        ClusterType::MainnetBeta => slot % 19 < 10,
-        ClusterType::Testnet => true,
-    }
-}
-

Review discussion on the removed function:

"for the backports, can we simply set this to true? removal fine to ride master"

"I don't see the point of keeping unused code around. In particular, keeping more discrepancy between branches would introduce more risk if more changes are backported and they hit merge conflicts, or something is missed because the branches are different."

"i don't see the point of taking extra changes to stable"

"avoiding unnecessary discrepancy between branches: 'In particular, keeping more discrepancy between branches would introduce more risk if more changes are backported and they hit merge conflicts, or something is missed because the branches are different.'"

"we don't backport feature gate removals. this is the same class of change"

"Backporting a feature gate removal would break ledger-tool for slots across the epoch boundary where the feature was activated, and a feature gate removal does not need to be backported anyway. But we do need to backport this change, so I'm not sure how that makes them the same class. And I think keeping the backported change the same as the master branch is the better approach, for the reasons mentioned above. Unnecessarily causing them to diverge is only asking for problems."
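For reference, the removed mainnet gate chained 10 of every 19 slots (~52.6%), and this PR makes chaining unconditional. A standalone sketch of that rollout arithmetic (not the crate's real types; `old_mainnet_gate` is a hypothetical name for illustration):

```rust
// Sketch of the removed gating logic. On MainnetBeta the old code chained
// Merkle shreds only when `slot % 19 < 10`; the other cluster types always
// chained. After this PR the gate is gone and every slot chains, i.e. the
// predicate is effectively `true` everywhere.

fn old_mainnet_gate(slot: u64) -> bool {
    slot % 19 < 10
}

fn main() {
    // Count gated slots over one full 19-slot period.
    let chained = (0..19u64).filter(|&s| old_mainnet_gate(s)).count();
    println!(
        "{chained}/19 slots chained (~{:.1}%)",
        100.0 * chained as f64 / 19.0
    );
}
```

Since the gate is periodic in the slot number, the rollout fraction is exact over any window that is a multiple of 19 slots.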

#[cfg(test)]
mod test {
use {
@@ -604,7 +582,6 @@ mod test {
        let shreds = run.finish_prev_slot(
            &keypair,
            0, // max_ticks_in_slot
-           ClusterType::Development,
            &mut ProcessShredsStats::default(),
        );
        assert!(run.completed);
@@ -854,7 +831,6 @@ mod test {
            &entries[0..entries.len() - 2],
            0,
            false,
-           ClusterType::Development,
            &mut stats,
            1000,
            1000,
@@ -864,16 +840,7 @@ mod test {
        assert!(!data.is_empty());
        assert!(!coding.is_empty());

-       let r = bs.entries_to_shreds(
-           &keypair,
-           &entries,
-           0,
-           false,
-           ClusterType::Development,
-           &mut stats,
-           10,
-           10,
-       );
+       let r = bs.entries_to_shreds(&keypair, &entries, 0, false, &mut stats, 10, 10);
        info!("{:?}", r);
        assert_matches!(r, Err(BroadcastError::TooManyShreds));
}