
Commit 2b92ab6

Merge branch 'main' into feat-fixed-celestiaorg#3078
2 parents eb069b1 + 9d906c7 commit 2b92ab6

22 files changed (+65 -52 lines)

Makefile

Lines changed: 1 addition & 0 deletions
@@ -334,6 +334,7 @@ mptcp-disable: disable-mptcp
 CONFIG_FILE ?= ${HOME}/.celestia-app/config/config.toml
 SEND_RECV_RATE ?= 10485760 # 10 MiB

+## configure-v3: Modifies config file in-place to conform to v3.x recommendations.
 configure-v3:
 	@echo "Using config file at: $(CONFIG_FILE)"
 	@if [ "$$(uname)" = "Darwin" ]; then \

app/default_overrides.go

Lines changed: 4 additions & 1 deletion
@@ -90,7 +90,7 @@ func (stakingModule) DefaultGenesis(cdc codec.JSONCodec) json.RawMessage {
 	})
 }

-// stakingModule wraps the x/staking module in order to overwrite specific
+// slashingModule wraps the x/slashing module in order to overwrite specific
 // ModuleManager APIs.
 type slashingModule struct {
 	slashing.AppModuleBasic
@@ -294,5 +294,8 @@ func DefaultAppConfig() *serverconfig.Config {
 	cfg.StateSync.SnapshotInterval = 1500
 	cfg.StateSync.SnapshotKeepRecent = 2
 	cfg.MinGasPrices = fmt.Sprintf("%v%s", appconsts.DefaultMinGasPrice, BondDenom)
+
+	const mebibyte = 1048576
+	cfg.GRPC.MaxRecvMsgSize = 20 * mebibyte
 	return cfg
 }
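For context, the new 20 MiB default applies to the node's gRPC server. A client expecting responses of that size typically also has to raise its own receive limit; below is a minimal, illustrative sketch (not part of this commit) using the standard `google.golang.org/grpc` dial options, with a placeholder endpoint address.

```go
package main

import (
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

const mebibyte = 1048576

func main() {
	// The node's DefaultAppConfig now allows 20 MiB gRPC messages server-side;
	// grpc-go clients default to a 4 MiB receive limit, so raise it to match.
	conn, err := grpc.Dial(
		"localhost:9090", // placeholder endpoint
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithDefaultCallOptions(grpc.MaxCallRecvMsgSize(20*mebibyte)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	// ... use conn with generated query/tx clients ...
}
```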

app/default_overrides_test.go

Lines changed: 8 additions & 0 deletions
@@ -64,6 +64,9 @@ func TestDefaultAppConfig(t *testing.T) {
 	assert.Equal(t, uint64(1500), cfg.StateSync.SnapshotInterval)
 	assert.Equal(t, uint32(2), cfg.StateSync.SnapshotKeepRecent)
 	assert.Equal(t, "0.002utia", cfg.MinGasPrices)
+
+	mebibyte := 1048576
+	assert.Equal(t, 20*mebibyte, cfg.GRPC.MaxRecvMsgSize)
 }

 func TestDefaultConsensusConfig(t *testing.T) {
@@ -89,6 +92,11 @@ func TestDefaultConsensusConfig(t *testing.T) {
 		}
 		assert.Equal(t, want, *got.Mempool)
 	})
+	t.Run("p2p overrides", func(t *testing.T) {
+		const mebibyte = 1048576
+		assert.Equal(t, int64(10*mebibyte), got.P2P.SendRate)
+		assert.Equal(t, int64(10*mebibyte), got.P2P.RecvRate)
+	})
 }

 func Test_icaDefaultGenesis(t *testing.T) {

docs/architecture/adr-001-abci++-adoption.md

Lines changed: 1 addition & 1 deletion
@@ -237,7 +237,7 @@ func SplitShares(txConf client.TxConfig, squareSize uint64, data *core.Data) ([]
 	for _, rawTx := range data.Txs {
 		... // decode the transaction

-		// write the tx to the square if it normal
+		// write the tx to the square if it is normal
 		if !hasWirePayForBlob(authTx) {
 			success, err := sqwr.writeTx(rawTx)
 			if err != nil {

docs/architecture/adr-004-qgb-relayer-security.md

Lines changed: 1 addition & 1 deletion
@@ -18,7 +18,7 @@ In fact, the QGB smart contract is designed to update the data commitments as fo
 - Check if the data commitment is signed using the current valset _(this is the problematic check)_
 - Then, other checks + commit

-So, if a relayer is up to date, it will submit data commitment and will pass the above checks.
+So, if a relayer is up-to-date, it will submit data commitment and will pass the above checks.

 Now, if the relayer is missing some data commitments or valset updates, then it will start catching up the following way:

docs/architecture/adr-009-non-interactive-default-rules-for-reduced-padding.md

Lines changed: 2 additions & 2 deletions
@@ -64,7 +64,7 @@ The commitment is still the same but we need to use the bottom subtree roots for

 Given a square size k, the biggest message that you can construct that is affected by the proposed non-interactive default rules has a size (k/2)². If you construct a message that is bigger than (k/2)² the `minSquareSize` will be k. If the minSquareSize is k in a square of size k then the current non-interactive default rules are equivalent to the proposed non-interactive default rules, because the message starts always at the beginning of a row. In other words, if you have k² shares in a message the worst constructible message is a quarter of that k²/4, because that is the size of the next smaller square.

-If you choose k²/4 as the worst constructible message it would still have O(sqrt(n)) subtree roots. This is because the size of the message is k²/4 with a width of k and a length of k/4. This means the number of rows the message fills approaches O(sqrt(n)). Therefore we need to find a message where the number of rows is log(n) of the size of the message.
+If you choose k²/4 as the worst constructible message it would still have O(sqrt(n)) subtree roots. This is because the size of the message is k²/4 with a width of k and a length of k/4. This means the number of rows the message fills approaches O(sqrt(n)). Therefore, we need to find a message where the number of rows is log(n) of the size of the message.

 With k being the square size and n being the number of shares and r being the number of rows, we want to find a message so that:
 k * r = n & log(n) = r => k = n/log(n)
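Spelled out, the constraint in the context lines above is only a direct restatement of what the ADR says (same log base as in the surrounding text):

```latex
k \cdot r = n \quad\text{and}\quad r = \log(n)
\;\Longrightarrow\; k = \frac{n}{\log(n)}
```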
@@ -179,7 +179,7 @@ Light Nodes have additional access to row and column roots from the Data Availab

 ### Total Proof Size for Partial Nodes

-Partial nodes in this context are light clients that may download all of the data in the reserved namespace. They check that the data behind the PFB was included in the `DataRoot`, via blob inclusion proofs.
+Partial nodes in this context are light clients that may download all the data in the reserved namespace. They check that the data behind the PFB was included in the `DataRoot`, via blob inclusion proofs.

 For this analysis, we take the result from the light nodes and scale them up to fill the whole square. We ignore for now the reserved namespace and what space it might occupy.
 For the proposed non-interactive default rules we are also creating 1 more message that could practically fit into a square. This is because the current non-interactive default rules fit one more message if we construct it this way and don't adjust the first and last messages.

docs/architecture/adr-010-remove-wire-msg-pay-for-blob.md

Lines changed: 1 addition & 1 deletion
@@ -130,4 +130,4 @@ Consider an incremental approach for this and related changes:

 ## References

-- [ADR 080: square size independent message commitments](./adr-008-square-size-independent-message-commitments.md)
+- [ADR 008: square size independent message commitments](./adr-008-square-size-independent-message-commitments.md)

docs/architecture/adr-011-optimistic-blob-size-independent-inclusion-proofs-and-pfb-fraud-proofs.md

Lines changed: 1 addition & 1 deletion
@@ -78,7 +78,7 @@ Note: the blue nodes are additional nodes that are needed for the Merkle proofs.

 ![PFB Merkle Proof](./assets/adr011/pfd-merkle-proof.png)

-Let's assume a square size of k. The amount of blue nodes from the shares to ROW1 is O(log(k)). The amount of blue nodes from ROW1 to the `DataRoot` is also O(log(k). You will have to include the shares themselves in the proof.
+Let's assume a square size of k. The amount of blue nodes from the shares to ROW1 is O(log(k)). The amount of blue nodes from ROW1 to the `DataRoot` is also O(log(k)). You will have to include the shares themselves in the proof.
 Share size := 512 bytes
 NMT-Node size := 32 bytes + 2\*8 bytes = 48 bytes
 MT-Node size := 32 bytes
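Using the sizes quoted above, a rough back-of-the-envelope estimate for a single-share proof, assuming a hypothetical square size k = 128 (so about log₂(k) = 7 nodes per Merkle path, ignoring constants and namespace details), would be:

```latex
1 \cdot 512 \;+\; 7 \cdot 48 \;+\; 7 \cdot 32
\;=\; 512 + 336 + 224
\;=\; 1072 \text{ bytes}
```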

docs/architecture/adr-018-network-upgrades.md

Lines changed: 4 additions & 4 deletions
@@ -39,9 +39,9 @@ All upgrades (barring social hard forks) are to be rolling upgrades. That is nod

 ## Detailed Design

-The design depends on a versioned state machine whereby the app version displayed in each block and agreed upon by all validators is the version that the transactions are both validated and executed against. If the celestia state machine is given a block at version 1 it will execute it with the v1 state machine if consensus provides a v2 block, all the transactions will be executed against the v2 state machine.
+The design depends on a versioned state machine whereby the app version displayed in each block and agreed upon by all validators is the version that the transactions are both validated and executed against. If the celestia state machine is given a block at version 1, it will execute it with the v1 state machine; if consensus provides a v2 block, all the transactions will be executed against the v2 state machine.

-Given this, a node can at any time spin up a v2 binary which will immediately be able to continue validating and executing v1 blocks as if it were a v1 machine.
+Given this, a node can at any time spin up a v2 binary, which will immediately be able to continue validating and executing v1 blocks as if it were a v1 machine.

 ### Configured Upgrade Height

@@ -51,12 +51,12 @@ The height of the v1 -> v2 upgrade will initially be supplied via CLI flag (i.e.
 - Given the uncertainty in scheduling, the system must be able to handle changes to the upgrade height that most commonly would come in the form of delays. Embedding the upgrade schedule in the binary is convenient for node operators and avoids the possibility for user errors. However, binaries are static. If the community wished to push back the upgrade by two weeks there is the possibility that some nodes would not rerun the new binary thus we'd get a split between nodes running the old schedule and nodes running the new schedule. To overcome this, proposers will only propose a version change in the first round of each height, thus allowing transactions to still be committed even under circumstances where there is no consensus on upgrading. Secondly, we define a range in which nodes will attempt to upgrade the app version and failing this will continue to run the current version. Lastly, the binary will have the ability to manually specify the app version height mapping and override the built-in values either through a flag or in the `app.toml` config. This is expected to be used in testing and in emergency situations only. Another example to keep in mind is if a quorum outright rejects an upgrade. If some of the validators are for the change they should have some way to continue participating in the network. Therefore we employ a range that nodes will attempt to upgrade and afterwards will continue on normally with the new binary however running the older version.
 - The system needs to be tolerant of unexpected faults in the upgrade process. This can be:
   - The community/contributors realize there is a bug in the new version after the binary has been released. Node operators will need to downgrade back to the previous version and restart their node.
-  - There is a halting bug in the migration or in processing of the first transactions. This most likely would be in the form of an apphash mismatch. This becomes more problematic with delayed execution as the block (with v2 transactions) has already been committed. Immediate execution has the advantage of the apphash mismatch being realised before the data is committed. It's still however feasible to over come this but it involves nodes rolling back the previous state and re-executing the transactions using the v1 state machine (which will skip over the v2 transactions). This means node operators should be able to manually override the app version that the proposer will propose with. Lastly, if state migrations occurred between v2 and v1, a reverse migration would need to be performed which would make things especially difficult. If we are unable to fallback to the previous version and continue then the other option is to remain halted until the bug is patched and the network can update and continue
+  - There is a halting bug in the migration or in processing of the first transactions. This most likely would be in the form of an apphash mismatch. This becomes more problematic with delayed execution as the block (with v2 transactions) has already been committed. Immediate execution has the advantage of the apphash mismatch being realized before the data is committed. It's still however feasible to overcome this but it involves nodes rolling back the previous state and re-executing the transactions using the v1 state machine (which will skip over the v2 transactions). This means node operators should be able to manually override the app version that the proposer will propose with. Lastly, if state migrations occurred between v2 and v1, a reverse migration would need to be performed which would make things especially difficult. If we are unable to fall back to the previous version and continue, then the other option is to remain halted until the bug is patched and the network can update and continue.
   - There is a bug that is detected that could halt the chain but hasn't yet. There are other things we can develop to combat such scenarios. One thing we can do is develop a circuit breaker similar to the designs proposed in [Cosmos SDK](https://github.com/cosmos/cosmos-sdk/tree/main/x/circuit). This can disable certain message types or modules either in `CheckTx` or `ProcessProposal`. This violates the consistency property between `PrepareProposal` and `ProcessProposal` but so long as a quorum are the same, will still allow the chain to progress (inconsistency here can be interpreted as byzantine).

 ### Future Work: Signaled Upgrade Height

-Preconfigured upgrade paths are vulnerable to halts. There is no indication that a quorum has in fact upgraded and that when the proposer proposes a block with the message to change version, that consensus will be reached. To mitigate this risk, the upgrade height can instead be signaled by validators. A version of `VoteExtension`s may be the most effective at ensuring this. Validators upon start up will automatically signal a version upgrade when they go to vote (i.e. `ExtendedVote`) so long as the latest supported version differs from the current network version. In `VerifyVoteExtension`, the version will be parsed and persisted (although not part of state). There is no verification. Upon a certain threshold which must be at least 2/3+ but could possibly be greater, the next proposer, who can support this version will propose a block with the `MsgVersionChange` that the quorum have agreed to. The rest works as before.
+Preconfigured upgrade paths are vulnerable to halts. There is no indication that a quorum has in fact upgraded and that when the proposer proposes a block with the message to change version, that consensus will be reached. To mitigate this risk, the upgrade height can instead be signaled by validators. A version of `VoteExtension`s may be the most effective at ensuring this. Validators upon start-up will automatically signal a version upgrade when they go to vote (i.e. `ExtendedVote`), so long as the latest supported version differs from the current network version. In `VerifyVoteExtension`, the version will be parsed and persisted (although not part of state). There is no verification. Upon a certain threshold which must be at least 2/3+ but could possibly be greater, the next proposer, who can support this version will propose a block with the `MsgVersionChange` that the quorum have agreed to. The rest works as before.

 For better performance, `VoteExtensions` should be modified such that empty messages don't require a signature (which is currently the case for v0.38 of [CometBFT](https://github.com/cometbft/cometbft/blob/91ffbf9e45afb49d34a4af91b031e14653ee5bd8/privval/file.go#L324))
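The signaling scheme described in the paragraph above boils down to tallying voting power per signaled version and only proposing an upgrade once at least 2/3 of the total power agrees. A self-contained sketch of that tally, using hypothetical types and names rather than the actual celestia-app or CometBFT API, might look like:

```go
package main

import "fmt"

// signal is a hypothetical record of one validator's signaled app version.
type signal struct {
	votingPower int64
	version     uint64
}

// versionToUpgradeTo returns the highest version whose accumulated voting power
// exceeds thresholdNum/thresholdDen of totalPower, or 0 if no upgrade is due yet.
func versionToUpgradeTo(signals []signal, totalPower, thresholdNum, thresholdDen int64) uint64 {
	powerByVersion := make(map[uint64]int64)
	for _, s := range signals {
		powerByVersion[s.version] += s.votingPower
	}
	var best uint64
	for v, p := range powerByVersion {
		// e.g. thresholdNum/thresholdDen = 2/3 means p must exceed 2/3 of total power.
		if p*thresholdDen > totalPower*thresholdNum && v > best {
			best = v
		}
	}
	return best
}

func main() {
	signals := []signal{{40, 2}, {30, 2}, {30, 1}}
	fmt.Println(versionToUpgradeTo(signals, 100, 2, 3)) // prints 2: 70/100 > 2/3
}
```

In the design above, the signals would come from `ExtendedVote`/`VerifyVoteExtension` rather than an in-memory slice, and the threshold could be configured higher than 2/3.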

docs/maintainers/docker.md

Lines changed: 2 additions & 2 deletions
@@ -2,15 +2,15 @@

 ## Context

-Github Actions should automatically build and publish a Docker image for each release. If Github Actions failed, you can manually build and publish a Docker image using this guide.
+GitHub Actions should automatically build and publish a Docker image for each release. If GitHub Actions failed, you can manually build and publish a Docker image using this guide.

 ## Prerequisites

 1. Navigate to <https://github.com/settings/tokens> and generate a new token with the `write:packages` scope.

 ## Steps

-1. Verify that a Docker image with the correct tag doesn't already exist for the release you're trying to create publish on [GHCR](https://github.com/celestiaorg/celestia-app/pkgs/container/celestia-app/versions)
+1. Verify that a Docker image with the correct tag doesn't already exist for the release you're trying to publish on [GHCR](https://github.com/celestiaorg/celestia-app/pkgs/container/celestia-app/versions)

 1. In a new terminal
