This repository was archived by the owner on Nov 15, 2023. It is now read-only.

Commit d5f651c by Andronik Ordian

some fixes to please cargo-spellcheck (#3550)

* some fixes to please cargo-spellcheck
* some (not all) fixes for the impl guide
* fix

1 parent a52dca2

24 files changed: +160 -143 lines

parachain/src/primitives.rs

Lines changed: 2 additions & 2 deletions

````diff
@@ -271,7 +271,7 @@ impl IsSystem for Sibling {
 	}
 }
 
-/// This type can be converted into and possibly from an AccountId (which itself is generic).
+/// This type can be converted into and possibly from an [`AccountId`] (which itself is generic).
 pub trait AccountIdConversion<AccountId>: Sized {
 	/// Convert into an account ID. This is infallible.
 	fn into_account(&self) -> AccountId;
@@ -300,7 +300,7 @@ impl<'a> parity_scale_codec::Input for TrailingZeroInput<'a> {
 }
 
 /// Format is b"para" ++ encode(parachain ID) ++ 00.... where 00... is indefinite trailing
-/// zeroes to fill AccountId.
+/// zeroes to fill [`AccountId`].
 impl<T: Encode + Decode + Default> AccountIdConversion<T> for Id {
 	fn into_account(&self) -> T {
 		(b"para", self)
````
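As an illustration of the `b"para" ++ encode(parachain ID) ++ 00...` format documented in the hunk above, here is a minimal, dependency-free sketch. It assumes a u32 parachain ID and a 32-byte account (the SCALE encoding of a u32 is its four little-endian bytes, hand-rolled here); `para_id_to_account` is an illustrative name, not the crate's API.

```rust
// Hedged sketch of the documented derivation: b"para", then the SCALE-encoded
// parachain ID, then trailing zeroes filling the rest of the AccountId.
// Assumptions (not from the diff): the ID is a u32 and the account is 32 bytes.
fn para_id_to_account(id: u32) -> [u8; 32] {
    let mut account = [0u8; 32]; // trailing bytes stay zero, per the format
    account[..4].copy_from_slice(b"para");
    // SCALE encodes a u32 as its four little-endian bytes.
    account[4..8].copy_from_slice(&id.to_le_bytes());
    account
}

fn main() {
    let acc = para_id_to_account(100);
    assert_eq!(&acc[..4], b"para");
    assert_eq!(&acc[4..8], &[100, 0, 0, 0]);
    assert!(acc[8..].iter().all(|b| *b == 0));
    println!("ok");
}
```

Because the tail is deterministic zero-padding, any decoder that reads the prefix and ID and ignores trailing zeroes (the `TrailingZeroInput` in the hunk) can round-trip the ID back out of the account.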

roadmap/implementers-guide/README.md

Lines changed: 2 additions & 1 deletion

````diff
@@ -1,6 +1,7 @@
 # The Polkadot Parachain Host Implementers' Guide
 
-The implementers' guide is compiled from several source files with [mdBook](https://github.com/rust-lang/mdBook). To view it live, locally, from the repo root:
+The implementers' guide is compiled from several source files with [`mdBook`](https://github.com/rust-lang/mdBook).
+To view it live, locally, from the repo root:
 
 ```sh
 cargo install mdbook mdbook-linkcheck mdbook-graphviz
````

roadmap/implementers-guide/src/SUMMARY.md

Lines changed: 12 additions & 12 deletions

````diff
@@ -11,18 +11,18 @@
 - [Architecture Overview](architecture.md)
 - [Messaging Overview](messaging.md)
 - [Runtime Architecture](runtime/README.md)
-  - [Initializer Module](runtime/initializer.md)
-  - [Configuration Module](runtime/configuration.md)
-  - [Shared](runtime/shared.md)
-  - [Disputes Module](runtime/disputes.md)
-  - [Paras Module](runtime/paras.md)
-  - [Scheduler Module](runtime/scheduler.md)
-  - [Inclusion Module](runtime/inclusion.md)
-  - [ParaInherent Module](runtime/parainherent.md)
-  - [DMP Module](runtime/dmp.md)
-  - [UMP Module](runtime/ump.md)
-  - [HRMP Module](runtime/hrmp.md)
-  - [Session Info Module](runtime/session_info.md)
+  - [`Initializer` Module](runtime/initializer.md)
+  - [`Configuration` Module](runtime/configuration.md)
+  - [`Shared`](runtime/shared.md)
+  - [`Disputes` Module](runtime/disputes.md)
+  - [`Paras` Module](runtime/paras.md)
+  - [`Scheduler` Module](runtime/scheduler.md)
+  - [`Inclusion` Module](runtime/inclusion.md)
+  - [`ParaInherent` Module](runtime/parainherent.md)
+  - [`DMP` Module](runtime/dmp.md)
+  - [`UMP` Module](runtime/ump.md)
+  - [`HRMP` Module](runtime/hrmp.md)
+  - [`Session Info` Module](runtime/session_info.md)
 - [Runtime APIs](runtime-api/README.md)
   - [Validators](runtime-api/validators.md)
   - [Validator Groups](runtime-api/validator-groups.md)
````

roadmap/implementers-guide/src/disputes-flow.md

Lines changed: 2 additions & 2 deletions

````diff
@@ -82,7 +82,7 @@ Only peers that already voted shall be queried for the dispute availability data
 
 The peer to be queried for disputes data, must be picked at random.
 
-A validator must retain code, persisted validation data and PoV until a block, that contains the dispute resolution, is finalized - plus an additional 24h.
+A validator must retain code, persisted validation data and PoV until a block, that contains the dispute resolution, is finalized - plus an additional 24 hours.
 
 Dispute availability gossip must continue beyond the dispute resolution, until the post resolution timeout expired (equiv to the timeout until which additional late votes are accepted).
 
@@ -108,7 +108,7 @@ If the count of votes pro or cons regarding the disputed block, reaches the requ
 
 If a block is found invalid by a dispute resolution, it must be blacklisted to avoid resync or further build on that chain if other chains are available (to be detailed in the grandpa fork choice rule).
 
-A dispute accepts Votes after the dispute is resolved, for 1d.
+A dispute accepts Votes after the dispute is resolved, for 1 day.
 
 If a vote is received, after the dispute is resolved, the vote shall still be recorded in the state root, albeit yielding less reward.
 
````

roadmap/implementers-guide/src/node/approval/approval-distribution.md

Lines changed: 3 additions & 3 deletions

````diff
@@ -131,7 +131,7 @@ Ensure a vector is present in `pending_known` for each hash in the view that doe
 
 Invoke `unify_with_peer(peer, view)` to catch them up to messages we have.
 
-We also need to use the `view.finalized_number` to remove the `PeerId` from any blocks that it won't be wanting information about anymore. Note that we have to be on guard for peers doing crazy stuff like jumping their 'finalized_number` forward 10 trillion blocks to try and get us stuck in a loop for ages.
+We also need to use the `view.finalized_number` to remove the `PeerId` from any blocks that it won't be wanting information about anymore. Note that we have to be on guard for peers doing crazy stuff like jumping their `finalized_number` forward 10 trillion blocks to try and get us stuck in a loop for ages.
 
 One of the safeguards we can implement is to reject view updates from peers where the new `finalized_number` is less than the previous.
 
@@ -192,7 +192,7 @@ We maintain a few invariants:
 
 The algorithm is the following:
 
-* Load the BlockEntry using `assignment.block_hash`. If it does not exist, report the source if it is `MessageSource::Peer` and return.
+* Load the `BlockEntry` using `assignment.block_hash`. If it does not exist, report the source if it is `MessageSource::Peer` and return.
 * Compute a fingerprint for the `assignment` using `claimed_candidate_index`.
 * If the source is `MessageSource::Peer(sender)`:
   * check if `peer` appears under `known_by` and whether the fingerprint is in the knowledge of the peer. If the peer does not know the block, report for providing data out-of-view and proceed. If the peer does know the block and the `sent` knowledge contains the fingerprint, report for providing replicate data and return, otherwise, insert into the `received` knowledge and return.
@@ -218,7 +218,7 @@ The algorithm is the following:
 
 Imports an approval signature referenced by block hash and candidate index:
 
-* Load the BlockEntry using `approval.block_hash` and the candidate entry using `approval.candidate_entry`. If either does not exist, report the source if it is `MessageSource::Peer` and return.
+* Load the `BlockEntry` using `approval.block_hash` and the candidate entry using `approval.candidate_entry`. If either does not exist, report the source if it is `MessageSource::Peer` and return.
 * Compute a fingerprint for the approval.
 * Compute a fingerprint for the corresponding assignment. If the `BlockEntry`'s knowledge does not contain that fingerprint, then report the source if it is `MessageSource::Peer` and return. All references to a fingerprint after this refer to the approval's, not the assignment's.
 * If the source is `MessageSource::Peer(sender)`:
````
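The per-peer `known_by` check in the assignment-import steps quoted above can be sketched as follows. The `PeerKnowledge` shape with separate `sent`/`received` sets mirrors the prose but is an illustrative stand-in, not the actual implementation; `Fingerprint` is simplified to an integer.

```rust
use std::collections::HashSet;

// Illustrative sketch of the per-peer knowledge check described above.
// The `Fingerprint` type and the sent/received split are assumptions.
type Fingerprint = u64;

#[derive(Default)]
struct PeerKnowledge {
    sent: HashSet<Fingerprint>,
    received: HashSet<Fingerprint>,
}

enum Action {
    ReportOutOfView, // peer does not know the block at all
    ReportDuplicate, // we already sent the peer this fingerprint
    Record,          // first time this peer gives us the fingerprint
    Ignore,          // already in the peer's `received` knowledge
}

// `knowledge` is the peer's entry under `known_by` for the block, if any.
fn on_assignment(knowledge: Option<&mut PeerKnowledge>, fp: Fingerprint) -> Action {
    match knowledge {
        None => Action::ReportOutOfView,
        Some(k) if k.sent.contains(&fp) => Action::ReportDuplicate,
        Some(k) => {
            if k.received.insert(fp) { Action::Record } else { Action::Ignore }
        }
    }
}

fn main() {
    let mut k = PeerKnowledge::default();
    k.sent.insert(1);
    assert!(matches!(on_assignment(Some(&mut k), 1), Action::ReportDuplicate));
    assert!(matches!(on_assignment(Some(&mut k), 2), Action::Record));
    assert!(matches!(on_assignment(Some(&mut k), 2), Action::Ignore));
    assert!(matches!(on_assignment(None, 3), Action::ReportOutOfView));
    println!("ok");
}
```

Keeping `sent` and `received` separate is what lets the subsystem distinguish a peer echoing back our own data (reportable) from a peer legitimately telling us something first.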

roadmap/implementers-guide/src/node/availability/availability-distribution.md

Lines changed: 14 additions & 14 deletions

````diff
@@ -13,36 +13,36 @@ In particular this subsystem is responsible for:
   this is to ensure availability by at least 2/3+ of all validators, this
   happens after a candidate is backed.
 - Fetch `PoV` from validators, when requested via `FetchPoV` message from
-  backing (pov_requester module).
--
+  backing (`pov_requester` module).
+
 The backing subsystem is responsible of making available data available in the
 local `Availability Store` upon validation. This subsystem will serve any
 network requests by querying that store.
 
 ## Protocol
 
 This subsystem does not handle any peer set messages, but the `pov_requester`
-does connecto to validators of the same backing group on the validation peer
+does connect to validators of the same backing group on the validation peer
 set, to ensure fast propagation of statements between those validators and for
 ensuring already established connections for requesting `PoV`s. Other than that
 this subsystem drives request/response protocols.
 
 Input:
 
-- OverseerSignal::ActiveLeaves(`[ActiveLeavesUpdate]`)
-- AvailabilityDistributionMessage{msg: ChunkFetchingRequest}
-- AvailabilityDistributionMessage{msg: PoVFetchingRequest}
-- AvailabilityDistributionMessage{msg: FetchPoV}
+- `OverseerSignal::ActiveLeaves(ActiveLeavesUpdate)`
+- `AvailabilityDistributionMessage{msg: ChunkFetchingRequest}`
+- `AvailabilityDistributionMessage{msg: PoVFetchingRequest}`
+- `AvailabilityDistributionMessage{msg: FetchPoV}`
 
 Output:
 
-- NetworkBridgeMessage::SendRequests(`[Requests]`, IfDisconnected::TryConnect)
-- AvailabilityStore::QueryChunk(candidate_hash, index, response_channel)
-- AvailabilityStore::StoreChunk(candidate_hash, chunk)
-- AvailabilityStore::QueryAvailableData(candidate_hash, response_channel)
-- RuntimeApiRequest::SessionIndexForChild
-- RuntimeApiRequest::SessionInfo
-- RuntimeApiRequest::AvailabilityCores
+- `NetworkBridgeMessage::SendRequests(Requests, IfDisconnected::TryConnect)`
+- `AvailabilityStore::QueryChunk(candidate_hash, index, response_channel)`
+- `AvailabilityStore::StoreChunk(candidate_hash, chunk)`
+- `AvailabilityStore::QueryAvailableData(candidate_hash, response_channel)`
+- `RuntimeApiRequest::SessionIndexForChild`
+- `RuntimeApiRequest::SessionInfo`
+- `RuntimeApiRequest::AvailabilityCores`
 
 ## Functionality
 
````

roadmap/implementers-guide/src/node/availability/availability-recovery.md

Lines changed: 13 additions & 13 deletions

````diff
@@ -10,14 +10,14 @@ This version of the availability recovery subsystem is based off of direct conne
 
 Input:
 
-- NetworkBridgeUpdateV1(update)
-- AvailabilityRecoveryMessage::RecoverAvailableData(candidate, session, backing_group, response)
+- `NetworkBridgeUpdateV1(update)`
+- `AvailabilityRecoveryMessage::RecoverAvailableData(candidate, session, backing_group, response)`
 
 Output:
 
-- NetworkBridge::SendValidationMessage
-- NetworkBridge::ReportPeer
-- AvailabilityStore::QueryChunk
+- `NetworkBridge::SendValidationMessage`
+- `NetworkBridge::ReportPeer`
+- `AvailabilityStore::QueryChunk`
 
 ## Functionality
 
@@ -51,7 +51,7 @@ struct InteractionParams {
 	validator_authority_keys: Vec<AuthorityId>,
 	validators: Vec<ValidatorId>,
 	// The number of pieces needed.
-	threshold: usize,
+	threshold: usize,
 	candidate_hash: Hash,
 	erasure_root: Hash,
 }
@@ -65,7 +65,7 @@ enum InteractionPhase {
 	RequestChunks {
 		// a random shuffling of the validators which indicates the order in which we connect to the validators and
 		// request the chunk from them.
-		shuffling: Vec<ValidatorIndex>,
+		shuffling: Vec<ValidatorIndex>,
 		received_chunks: Map<ValidatorIndex, ErasureChunk>,
 		requesting_chunks: FuturesUnordered<Receiver<ErasureChunkRequestResponse>>,
 	}
@@ -90,15 +90,15 @@ On `Conclude`, shut down the subsystem.
 
 1. Check the `availability_lru` for the candidate and return the data if so.
 1. Check if there is already an interaction handle for the request. If so, add the response handle to it.
-1. Otherwise, load the session info for the given session under the state of `live_block_hash`, and initiate an interaction with *launch_interaction*. Add an interaction handle to the state and add the response channel to it.
+1. Otherwise, load the session info for the given session under the state of `live_block_hash`, and initiate an interaction with *`launch_interaction`*. Add an interaction handle to the state and add the response channel to it.
 1. If the session info is not available, return `RecoveryError::Unavailable` on the response channel.
 
 ### From-interaction logic
 
 #### `FromInteraction::Concluded`
 
 1. Load the entry from the `interactions` map. It should always exist, if not for logic errors. Send the result to each member of `awaiting`.
-1. Add the entry to the availability_lru.
+1. Add the entry to the `availability_lru`.
 
 ### Interaction logic
 
@@ -123,12 +123,12 @@ const N_PARALLEL: usize = 50;
 * Request `AvailabilityStoreMessage::QueryAvailableData`. If it exists, return that.
 * If the phase is `InteractionPhase::RequestFromBackers`
   * Loop:
-    * If the `requesting_pov` is `Some`, poll for updates on it. If it concludes, set `requesting_pov` to `None`.
+    * If the `requesting_pov` is `Some`, poll for updates on it. If it concludes, set `requesting_pov` to `None`.
     * If the `requesting_pov` is `None`, take the next backer off the `shuffled_backers`.
    * If the backer is `Some`, issue a `NetworkBridgeMessage::Requests` with a network request for the `AvailableData` and wait for the response.
-      * If it concludes with a `None` result, return to beginning.
-      * If it concludes with available data, attempt a re-encoding.
-        * If it has the correct erasure-root, break and issue a `Ok(available_data)`.
+      * If it concludes with a `None` result, return to beginning.
+      * If it concludes with available data, attempt a re-encoding.
+        * If it has the correct erasure-root, break and issue a `Ok(available_data)`.
       * If it has an incorrect erasure-root, return to beginning.
     * If the backer is `None`, set the phase to `InteractionPhase::RequestChunks` with a random shuffling of validators and empty `next_shuffling`, `received_chunks`, and `requesting_chunks` and break the loop.
 
````
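The `RequestChunks` phase described above (walk a shuffled validator order, stop once `threshold` pieces are in hand) can be sketched synchronously. The `fetch_chunk` callback and `Chunk` type are illustrative stand-ins for the real networked, `FuturesUnordered`-based implementation.

```rust
// Synchronous sketch of the RequestChunks phase: request chunks from
// validators in shuffled order until `threshold` (the number of pieces
// needed for erasure reconstruction) chunks have been received.
type ValidatorIndex = u32;
type Chunk = Vec<u8>;

fn gather_chunks(
    shuffling: &[ValidatorIndex],
    threshold: usize,
    mut fetch_chunk: impl FnMut(ValidatorIndex) -> Option<Chunk>,
) -> Option<Vec<(ValidatorIndex, Chunk)>> {
    let mut received = Vec::new();
    for &v in shuffling {
        if received.len() >= threshold {
            break; // enough pieces to attempt reconstruction
        }
        if let Some(chunk) = fetch_chunk(v) {
            received.push((v, chunk));
        }
    }
    // Recovery is only possible once the threshold is met.
    (received.len() >= threshold).then(|| received)
}

fn main() {
    // 6 validators; only even-indexed ones respond.
    let shuffling: Vec<u32> = (0..6).collect();
    let got = gather_chunks(&shuffling, 3, |v| (v % 2 == 0).then(|| vec![v as u8]));
    assert_eq!(got.as_ref().map(|c| c.len()), Some(3));
    // With too few responsive validators, recovery fails.
    assert!(gather_chunks(&shuffling, 4, |v| (v % 2 == 0).then(|| vec![v as u8])).is_none());
    println!("ok");
}
```

The real subsystem keeps up to `N_PARALLEL` requests in flight instead of iterating one at a time, but the termination condition is the same threshold check.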

roadmap/implementers-guide/src/node/availability/bitfield-signing.md

Lines changed: 2 additions & 2 deletions

````diff
@@ -10,8 +10,8 @@ There is no dedicated input mechanism for bitfield signing. Instead, Bitfield Si
 
 Output:
 
-- BitfieldDistribution::DistributeBitfield: distribute a locally signed bitfield
-- AvailabilityStore::QueryChunk(CandidateHash, validator_index, response_channel)
+- `BitfieldDistribution::DistributeBitfield`: distribute a locally signed bitfield
+- `AvailabilityStore::QueryChunk(CandidateHash, validator_index, response_channel)`
 
 ## Functionality
 
````

roadmap/implementers-guide/src/node/backing/candidate-backing.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -114,7 +114,7 @@ fn spawn_validation_work(candidate, parachain head, validation function) {
 }
 ```
 
-### Fetch Pov Block
+### Fetch PoV Block
 
 Create a `(sender, receiver)` pair.
 Dispatch a [`AvailabilityDistributionMessage`][ADM]`::FetchPoV{ validator_index, pov_hash, candidate_hash, tx, } and listen on the passed receiver for a response. Availability distribution will send the request to the validator specified by `validator_index`, which might not be serving it for whatever reasons, therefore we need to retry with other backing validators in that case.
````

roadmap/implementers-guide/src/node/backing/statement-distribution.md

Lines changed: 7 additions & 7 deletions

````diff
@@ -8,20 +8,20 @@ The Statement Distribution Subsystem is responsible for distributing statements
 
 Input:
 
-- NetworkBridgeUpdate(update)
-- StatementDistributionMessage
+- `NetworkBridgeUpdate(update)`
+- `StatementDistributionMessage`
 
 Output:
 
-- NetworkBridge::SendMessage(`[PeerId]`, message)
-- NetworkBridge::SendRequests (StatementFetching)
-- NetworkBridge::ReportPeer(PeerId, cost_or_benefit)
+- `NetworkBridge::SendMessage(PeerId, message)`
+- `NetworkBridge::SendRequests(StatementFetching)`
+- `NetworkBridge::ReportPeer(PeerId, cost_or_benefit)`
 
 ## Functionality
 
 Implemented as a gossip protocol. Handle updates to our view and peers' views. Neighbor packets are used to inform peers which chain heads we are interested in data for.
 
-It is responsible for distributing signed statements that we have generated and forwarding them, and for detecting a variety of Validator misbehaviors for reporting to [Misbehavior Arbitration](../utility/misbehavior-arbitration.md). During the Backing stage of the inclusion pipeline, it's the main point of contact with peer nodes. On receiving a signed statement from a peer in the same backing group, assuming the peer receipt state machine is in an appropriate state, it sends the Candidate Receipt to the [Candidate Backing subsystem](candidate-backing.md) to handle the validator's statement. On receiving `StatementDistributionMessage::Share` we make sure to send messages to our backing group in addition to random other peers, to ensure a fast backing process and getting all statements quickly for distribtution.
+It is responsible for distributing signed statements that we have generated and forwarding them, and for detecting a variety of Validator misbehaviors for reporting to [Misbehavior Arbitration](../utility/misbehavior-arbitration.md). During the Backing stage of the inclusion pipeline, it's the main point of contact with peer nodes. On receiving a signed statement from a peer in the same backing group, assuming the peer receipt state machine is in an appropriate state, it sends the Candidate Receipt to the [Candidate Backing subsystem](candidate-backing.md) to handle the validator's statement. On receiving `StatementDistributionMessage::Share` we make sure to send messages to our backing group in addition to random other peers, to ensure a fast backing process and getting all statements quickly for distribution.
 
 Track equivocating validators and stop accepting information from them. Establish a data-dependency order:
 
@@ -71,7 +71,7 @@ The simple approach is to say that we only receive up to two `Seconded` statemen
 
 With that in mind, this simple approach has a caveat worth digging deeper into.
 
-First: We may be aware of two equivocated `Seconded` statements issued by a validator. A totally honest peer of ours can also be aware of one or two different `Seconded` statements issued by the same validator. And yet another peer may be aware of one or two _more_ `Seconded` statements. And so on. This interacts badly with pre-emptive sending logic. Upon sending a `Seconded` statement to a peer, we will want to pre-emptively follow up with all statements relative to that candidate. Waiting for acknowledgement introduces latency at every hop, so that is best avoided. What can happen is that upon receipt of the `Seconded` statement, the peer will discard it as it falls beyond the bound of 2 that it is allowed to store. It cannot store anything in memory about discarded candidates as that would introduce a DoS vector. Then, the peer would receive from us all of the statements pertaining to that candidate, which, from its perspective, would be undesired - they are data-dependent on the `Seconded` statement we sent them, but they have erased all record of that from their memory. Upon receiving a potential flood of undesired statements, this 100% honest peer may choose to disconnect from us. In this way, an adversary may be able to partition the network with careful distribution of equivocated `Seconded` statements.
+First: We may be aware of two equivocated `Seconded` statements issued by a validator. A totally honest peer of ours can also be aware of one or two different `Seconded` statements issued by the same validator. And yet another peer may be aware of one or two _more_ `Seconded` statements. And so on. This interacts badly with pre-emptive sending logic. Upon sending a `Seconded` statement to a peer, we will want to pre-emptively follow up with all statements relative to that candidate. Waiting for acknowledgment introduces latency at every hop, so that is best avoided. What can happen is that upon receipt of the `Seconded` statement, the peer will discard it as it falls beyond the bound of 2 that it is allowed to store. It cannot store anything in memory about discarded candidates as that would introduce a DoS vector. Then, the peer would receive from us all of the statements pertaining to that candidate, which, from its perspective, would be undesired - they are data-dependent on the `Seconded` statement we sent them, but they have erased all record of that from their memory. Upon receiving a potential flood of undesired statements, this 100% honest peer may choose to disconnect from us. In this way, an adversary may be able to partition the network with careful distribution of equivocated `Seconded` statements.
 
 The fix is to track, per-peer, the hashes of up to 4 candidates per validator (per relay-parent) that the peer is aware of. It is 4 because we may send them 2 and they may send us 2 different ones. We track the data that they are aware of as the union of things we have sent them and things they have sent us. If we receive a 1st or 2nd `Seconded` statement from a peer, we note it in the peer's known candidates even if we do disregard the data locally. And then, upon receipt of any data dependent on that statement, we do not reduce that peer's standing in our eyes, as the data was not undesired.
 
````
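The per-peer tracking fix described in this hunk (up to 4 known candidate hashes per validator and relay-parent, as the union of what we sent and what they sent) can be sketched as follows; the type and method names are illustrative, not the implementation's.

```rust
use std::collections::HashSet;

// Illustrative sketch of the equivocation bookkeeping: per (validator,
// relay-parent), remember up to MAX_KNOWN candidate hashes a peer is aware
// of, as the union of data sent to and received from that peer.
const MAX_KNOWN: usize = 4; // we may send them 2 and they may send us 2 different ones

type CandidateHash = [u8; 32];

#[derive(Default)]
struct PeerKnownCandidates {
    known: HashSet<CandidateHash>,
}

impl PeerKnownCandidates {
    /// Note a candidate the peer is now aware of (sent or received).
    /// Returns `false` once the bound is exceeded; data dependent on an
    /// untracked candidate counts as undesired.
    fn note(&mut self, candidate: CandidateHash) -> bool {
        if self.known.contains(&candidate) {
            return true;
        }
        if self.known.len() >= MAX_KNOWN {
            return false;
        }
        self.known.insert(candidate);
        true
    }

    /// Data depending on a known candidate must not lower the peer's standing.
    fn is_known(&self, candidate: &CandidateHash) -> bool {
        self.known.contains(candidate)
    }
}

fn main() {
    let mut peer = PeerKnownCandidates::default();
    for i in 0u8..4 {
        assert!(peer.note([i; 32])); // first four candidates are tracked
    }
    assert!(!peer.note([9; 32])); // a fifth falls outside the bound
    assert!(peer.is_known(&[0; 32]));
    assert!(!peer.is_known(&[9; 32]));
    println!("ok");
}
```

Because the set is the union of both directions of traffic, a follow-up statement that depends on a `Seconded` we noted here is never mistaken for a flood, which is exactly the partition attack the prose describes.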
