docs/architecture/adr-009-non-interactive-default-rules-for-reduced-padding.md (2 additions & 2 deletions)
@@ -64,7 +64,7 @@ The commitment is still the same but we need to use the bottom subtree roots for
Given a square size k, the biggest message that you can construct that is affected by the proposed non-interactive default rules has a size of (k/2)². If you construct a message that is bigger than (k/2)², the `minSquareSize` will be k. If the `minSquareSize` is k in a square of size k, then the current non-interactive default rules are equivalent to the proposed non-interactive default rules, because the message always starts at the beginning of a row. In other words, if you have k² shares in a message, the worst constructible message is a quarter of that, k²/4, because that is the size of the next smaller square.
- If you choose k²/4 as the worst constructible message it would still have O(sqrt(n)) subtree roots. This is because the size of the message is k²/4 with a width of k and a length of k/4. This means the number of rows the message fills approaches O(sqrt(n)). Therefore we need to find a message where the number of rows is log(n) of the size of the message.
+ If you choose k²/4 as the worst constructible message it would still have O(sqrt(n)) subtree roots. This is because the size of the message is k²/4 with a width of k and a length of k/4. This means the number of rows the message fills approaches O(sqrt(n)). Therefore, we need to find a message where the number of rows is log(n) of the size of the message.
With k being the square size, n being the number of shares, and r being the number of rows, we want to find a message so that:
k * r = n & log(n) = r => k = n/log(n)
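As a rough illustration of this relation (a sketch only; the base-2 logarithm and the fractional values of k and r are assumptions made for readability, not taken from the ADR), the following Go snippet evaluates r = log(n) and the corresponding k = n/log(n) for a few message sizes:

```go
package main

import (
	"fmt"
	"math"
)

// For a message of n shares that fills r full rows of width k we have
// k * r = n. Requiring r = log2(n) then gives k = n / log2(n).
func main() {
	for _, n := range []float64{256, 1024, 4096, 16384} {
		r := math.Log2(n) // number of rows the message should span
		k := n / r        // row width (square size) implied by k = n / log(n)
		fmt.Printf("n = %6.0f shares -> r = %5.2f rows, k = %7.2f\n", n, r, k)
	}
}
```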
@@ -179,7 +179,7 @@ Light Nodes have additional access to row and column roots from the Data Availab
### Total Proof Size for Partial Nodes
- Partial nodes in this context are light clients that may download all of the data in the reserved namespace. They check that the data behind the PFB was included in the `DataRoot`, via blob inclusion proofs.
+ Partial nodes in this context are light clients that may download all the data in the reserved namespace. They check that the data behind the PFB was included in the `DataRoot`, via blob inclusion proofs.
For this analysis, we take the results from the light nodes and scale them up to fill the whole square. We ignore for now the reserved namespace and what space it might occupy.
For the proposed non-interactive default rules we are also creating 1 more message that could practically fit into a square. This is because the current non-interactive default rules fit one more message if we construct it this way and don't adjust the first and last messages.
- Let's assume a square size of k. The amount of blue nodes from the shares to ROW1 is O(log(k)). The amount of blue nodes from ROW1 to the `DataRoot` is also O(log(k). You will have to include the shares themselves in the proof.
+ Let's assume a square size of k. The amount of blue nodes from the shares to ROW1 is O(log(k)). The amount of blue nodes from ROW1 to the `DataRoot` is also O(log(k)). You will have to include the shares themselves in the proof.
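To make the counting above concrete, here is a small illustrative Go sketch (not the actual celestia-app proof code; constants such as the erasure-coded row width and namespace hashes are ignored) that estimates the number of Merkle nodes in such a proof as roughly log2(k) from the shares to the row root plus log2(k) from the row root to the `DataRoot`:

```go
package main

import (
	"fmt"
	"math"
)

// approxProofNodes returns a rough, order-of-magnitude estimate of the
// number of Merkle inner nodes ("blue nodes") needed for an inclusion
// proof in a square of size k: ~log2(k) from the shares up to the row
// root and ~log2(k) from the row root up to the DataRoot. The shares
// themselves must be sent in addition to these nodes.
func approxProofNodes(k int) int {
	perLevel := int(math.Ceil(math.Log2(float64(k))))
	return 2 * perLevel
}

func main() {
	for _, k := range []int{32, 64, 128, 256} {
		fmt.Printf("square size k = %3d -> ~%d proof nodes (plus the shares)\n",
			k, approxProofNodes(k))
	}
}
```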
docs/architecture/adr-018-network-upgrades.md (4 additions & 4 deletions)
@@ -39,9 +39,9 @@ All upgrades (barring social hard forks) are to be rolling upgrades. That is nod
## Detailed Design
- The design depends on a versioned state machine whereby the app version displayed in each block and agreed upon by all validators is the version that the transactions are both validated and executed against. If the celestia state machine is given a block at version 1 it will execute it with the v1 state machine if consensus provides a v2 block, all the transactions will be executed against the v2 state machine.
+ The design depends on a versioned state machine whereby the app version displayed in each block and agreed upon by all validators is the version that the transactions are both validated and executed against. If the celestia state machine is given a block at version 1, it will execute it with the v1 state machine; if consensus provides a v2 block, all the transactions will be executed against the v2 state machine.
- Given this, a node can at any time spin up a v2 binary which will immediately be able to continue validating and executing v1 blocks as if it were a v1 machine.
+ Given this, a node can at any time spin up a v2 binary, which will immediately be able to continue validating and executing v1 blocks as if it were a v1 machine.
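The following is a minimal, hypothetical Go sketch of that idea (the type and function names are illustrative, not the actual celestia-app implementation): the app version carried in the block selects which rule set validates and executes the transactions, so a v2 binary keeps executing v1 blocks with v1 rules until consensus switches the version.

```go
package main

import "fmt"

type block struct {
	appVersion uint64
	txs        []string
}

// stateMachine stands in for one versioned rule set of the application.
type stateMachine interface {
	execute(txs []string) error
}

type v1Machine struct{}

func (v1Machine) execute(txs []string) error {
	fmt.Printf("executing %d txs with v1 rules\n", len(txs))
	return nil
}

type v2Machine struct{}

func (v2Machine) execute(txs []string) error {
	fmt.Printf("executing %d txs with v2 rules\n", len(txs))
	return nil
}

// executeBlock dispatches to the state machine matching the block's app version.
func executeBlock(b block, machines map[uint64]stateMachine) error {
	sm, ok := machines[b.appVersion]
	if !ok {
		return fmt.Errorf("unsupported app version %d", b.appVersion)
	}
	return sm.execute(b.txs)
}

func main() {
	machines := map[uint64]stateMachine{1: v1Machine{}, 2: v2Machine{}}
	// A v2 binary still executes v1 blocks with the v1 rule set.
	_ = executeBlock(block{appVersion: 1, txs: []string{"tx"}}, machines)
	// Once consensus produces a v2 block, the v2 rule set is used.
	_ = executeBlock(block{appVersion: 2, txs: []string{"tx"}}, machines)
}
```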
### Configured Upgrade Height
@@ -51,12 +51,12 @@ The height of the v1 -> v2 upgrade will initially be supplied via CLI flag (i.e.
- Given the uncertainty in scheduling, the system must be able to handle changes to the upgrade height that most commonly would come in the form of delays. Embedding the upgrade schedule in the binary is convenient for node operators and avoids the possibility of user error. However, binaries are static. If the community wished to push back the upgrade by two weeks, there is the possibility that some nodes would not rerun the new binary, and thus we'd get a split between nodes running the old schedule and nodes running the new schedule. To overcome this, proposers will only propose a version change in the first round of each height, thus allowing transactions to still be committed even under circumstances where there is no consensus on upgrading. Secondly, we define a range in which nodes will attempt to upgrade the app version and, failing this, will continue to run the current version. Lastly, the binary will have the ability to manually specify the app version height mapping and override the built-in values either through a flag or in the `app.toml` config. This is expected to be used in testing and in emergency situations only. Another example to keep in mind is if a quorum outright rejects an upgrade. If some of the validators are for the change, they should have some way to continue participating in the network. Therefore we employ a range in which nodes will attempt to upgrade; afterwards they will continue on normally with the new binary, albeit running the older version.
- The system needs to be tolerant of unexpected faults in the upgrade process. This can be:
- The community/contributors realize there is a bug in the new version after the binary has been released. Node operators will need to downgrade back to the previous version and restart their node.
- - There is a halting bug in the migration or in processing of the first transactions. This most likely would be in the form of an apphash mismatch. This becomes more problematic with delayed execution as the block (with v2 transactions) has already been committed. Immediate execution has the advantage of the apphash mismatch being realised before the data is committed. It's still however feasible to over come this but it involves nodes rolling back the previous state and re-executing the transactions using the v1 state machine (which will skip over the v2 transactions). This means node operators should be able to manually override the app version that the proposer will propose with. Lastly, if state migrations occurred between v2 and v1, a reverse migration would need to be performed which would make things especially difficult. If we are unable to fallback to the previous version and continue then the other option is to remain halted until the bug is patched and the network can update and continue
+ - There is a halting bug in the migration or in processing of the first transactions. This most likely would be in the form of an apphash mismatch. This becomes more problematic with delayed execution, as the block (with v2 transactions) has already been committed. Immediate execution has the advantage of the apphash mismatch being realized before the data is committed. It is, however, still feasible to overcome this, but it involves nodes rolling back the previous state and re-executing the transactions using the v1 state machine (which will skip over the v2 transactions). This means node operators should be able to manually override the app version that the proposer will propose with. Lastly, if state migrations occurred between v2 and v1, a reverse migration would need to be performed, which would make things especially difficult. If we are unable to fall back to the previous version and continue, then the other option is to remain halted until the bug is patched and the network can update and continue.
- There is a bug that has been detected that could halt the chain but hasn't yet. There are other things we can develop to combat such scenarios. One thing we can do is develop a circuit breaker similar to the designs proposed in [Cosmos SDK](https://github.com/cosmos/cosmos-sdk/tree/main/x/circuit). This can disable certain message types or modules either in `CheckTx` or `ProcessProposal`. This violates the consistency property between `PrepareProposal` and `ProcessProposal`, but so long as a quorum behaves the same, it will still allow the chain to progress (inconsistency here can be interpreted as byzantine).
### Future Work: Signaled Upgrade Height
- Preconfigured upgrade paths are vulnerable to halts. There is no indication that a quorum has in fact upgraded and that when the proposer proposes a block with the message to change version, that consensus will be reached. To mitigate this risk, the upgrade height can instead be signaled by validators. A version of `VoteExtension`s may be the most effective at ensuring this. Validators upon startup will automatically signal a version upgrade when they go to vote (i.e. `ExtendedVote`) so long as the latest supported version differs from the current network version. In `VerifyVoteExtension`, the version will be parsed and persisted (although not part of state). There is no verification. Upon a certain threshold which must be at least 2/3+ but could possibly be greater, the next proposer, who can support this version will propose a block with the `MsgVersionChange` that the quorum have agreed to. The rest works as before.
+ Preconfigured upgrade paths are vulnerable to halts. There is no indication that a quorum has in fact upgraded, nor that consensus will be reached when the proposer proposes a block with the message to change version. To mitigate this risk, the upgrade height can instead be signaled by validators. A version of `VoteExtension`s may be the most effective at ensuring this. Validators upon start-up will automatically signal a version upgrade when they go to vote (i.e. `ExtendedVote`), so long as the latest supported version differs from the current network version. In `VerifyVoteExtension`, the version will be parsed and persisted (although not part of state). There is no verification. Upon reaching a certain threshold, which must be at least 2/3+ but could possibly be greater, the next proposer who can support this version will propose a block with the `MsgVersionChange` that the quorum has agreed to. The rest works as before.
For better performance, `VoteExtensions` should be modified such that empty messages don't require a signature (which is currently the case for v0.38 of [CometBFT](https://github.com/cometbft/cometbft/blob/91ffbf9e45afb49d34a4af91b031e14653ee5bd8/privval/file.go#L324)).
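As an illustrative sketch of the tallying step described above (hypothetical names and a plain 2/3-of-voting-power check; the real threshold and data structures may differ), a proposer could decide whether a quorum has signaled a higher version like this:

```go
package main

import "fmt"

// signal is one validator's signaled (highest supported) app version
// together with its voting power.
type signal struct {
	power   int64
	version uint64
}

// versionWithQuorum returns the highest version above current for which
// validators holding at least 2/3 of the total voting power have signaled
// support (i.e. signaled that version or a higher one).
func versionWithQuorum(signals []signal, current uint64, totalPower int64) (uint64, bool) {
	best, found := current, false
	for _, candidate := range signals {
		if candidate.version <= current {
			continue
		}
		var supporting int64
		for _, s := range signals {
			if s.version >= candidate.version {
				supporting += s.power
			}
		}
		if 3*supporting >= 2*totalPower && candidate.version > best {
			best, found = candidate.version, true
		}
	}
	return best, found
}

func main() {
	// Powers sum to 100; 70 of it has signaled support for version 2.
	signals := []signal{{power: 40, version: 2}, {power: 30, version: 2}, {power: 30, version: 1}}
	if v, ok := versionWithQuorum(signals, 1, 100); ok {
		fmt.Println("quorum reached: next capable proposer may include MsgVersionChange to version", v)
	}
}
```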
docs/maintainers/docker.md (2 additions & 2 deletions)
@@ -2,15 +2,15 @@
## Context
- Github Actions should automatically build and publish a Docker image for each release. If Github Actions failed, you can manually build and publish a Docker image using this guide.
+ GitHub Actions should automatically build and publish a Docker image for each release. If GitHub Actions failed, you can manually build and publish a Docker image using this guide.
## Prerequisites
1. Navigate to <https://github.com/settings/tokens> and generate a new token with the `write:packages` scope.
## Steps
- 1. Verify that a Docker image with the correct tag doesn't already exist for the release you're trying to create publish on [GHCR](https://github.com/celestiaorg/celestia-app/pkgs/container/celestia-app/versions)
+ 1. Verify that a Docker image with the correct tag doesn't already exist for the release you're trying to publish on [GHCR](https://github.com/celestiaorg/celestia-app/pkgs/container/celestia-app/versions)