
Commit ad7d2ac

Authored by jjeangal, Tristan-Wilson, eljobe, ganeshvanahalli, and tsahee
Sync Nitro v3.5.6 (#577)
* Auction resolution latency metric
* Timeboost: swap sequencers seamlessly
* Upgrade actions/upload-artifact in CI
* Prioritize reading from timeboostAuctionResolutionTxQueue
* chore: fix some function names in comments. Signed-off-by: linchizhen <[email protected]>
* Add the installation of the cbindgen binary. This binary is used in the make invocation, so it needs to be installed.
* Add system test to test seamless swap of active sequencer
* Fix lint errors
* Add cbindgen to both workflows. It turns out that the ci.yml workflow was also failing because of the removal of cbindgen from the Ubuntu 24.04 image used by the GitHub action runners. This change also moves the installation of cbindgen earlier in the codeql-analysis.yml file and expands the scope of what is cached after a Rust installation to include the cbindgen binary (which is installed in ~/.cargo/bin).
* Bump golang.org/x/net from 0.26.0 to 0.33.0. Bumps [golang.org/x/net](https://github.com/golang/net) from 0.26.0 to 0.33.0 ([Commits](golang/net@v0.26.0...v0.33.0)); dependency-name: golang.org/x/net, dependency-type: indirect. Signed-off-by: dependabot[bot] <[email protected]>
* Implement CodeAt for contractAdapter. CodeAt is needed if the contract call returns a zero-length response; the bind library code uses CodeAt to check whether there is any contract there at all.
* Make redis-related updates more async
* Move timeboost init back to mainImpl. It doesn't work where it was before: the contractAdapter is unable to read the express lane contract.
* Address PR comments
* Move express lane start to after init
* Make auctioneer rpc namespace work on non-jwt
* Make best effort to sync from redis during a swap
* Disable control transfer
* Update bold
* geth-pin update: performance and metrics improvements
* Maintenance api
* Reduce roundInfo lock contention in syncFromRedis
* Address PR comments
* Flush TrieDB during maintenance
* Forward auction resolution txs
* Move maintenance trie cap limit config to execution config
* Fix units related to trie cap limit config
* Apply suggestions from code review. Co-authored-by: Joshua Colvin <[email protected]>
* Fix mutex in FlushTrieDB
* maintenance: improve api and address reviews
* Address PR comments
* Address PR comments
* Fix lint error
* Fix failing test after merge from upstream
* Update pin
* Fix for flatcalltracer, originally just on the v3.3.x branch; fixes NIT-3071 and pulls in OffchainLabs/go-ethereum#401
* Comment fixes by PR reviewer
* Fix TestPrestateTracingSimple
* Further fix TestPrestateTracingSimple
* Add two new fields that were added to the rollup config
* Extra logging for when forwarding fails
* Attempt to reduce flakiness of bold virtual block tests
* Option to use redis coordinator to find sequencer. This adds two new options for the autonomous-auctioneer which allow it to discover the active sequencer so that it can send its auction resolution transactions directly to it. If the sequencer is using the redis coordinator to manage which sequencer is active, then the same redis url can be passed as an option to the auctioneer. The new options are --auctioneer-server.redis-coordinator-url and --auctioneer-server.use-redis-coordinator.
* Revert
* Revert changes to nitro-testnode
* Allow sequencer to not use block metadata
* Do not add zero metadata or track it when timeboost is not enabled
* Use the correct error. When the state provider isn't able to do its job because the chain is lagging behind head, it's supposed to use a predefined error from the l2stateprovider package. Instead, the nitro implementation of the provider was using a redefinition of the error. While this coding error easily accounts for why there were a bunch of ERROR logs about the chain not keeping up with the latest changes, it does not explain why the test is sometimes flaky. I suspect that the original commits in this PR were chasing the ERROR logs in the flaky tests, but that those were not the root cause of the test failures.
* Bulk syncing of missing block metadata should start from the block number provided in the trackBlockMetadataFrom flag; this prevents querying of stale missing data
* Take out JWT from timeboost system tests
* Add small sleep in BoLD when chain is behind. Pulls in https://github.com/OffchainLabs/bold/pull/726/files
* Avoid leaking URL in fallback client
* Fix import lint issue
* Update bold pin
* TransactionStreamer: don't fetch block metadata when unnecessary
* Fix failing tests
* Fix test
* Revert "Update bold pin". This reverts commit 0800d46.
* chainInfo supports track-block-metadata-from
* Timeboost: don't store, or publish to feed, blockMetadata of blocks lower than the TrackBlockMetadataFrom config option
* Allow sequencer to collect metadata without timeboost
* Remove accidental nitro-testnode pin update
* Add blockinfo to sepolia metadata
* Fix reading of pending ExpressLane messages from redis
* Decouple scanning of keys from fetching of messages from redis
* Mark `timeboost` as dangerous since it is a work in progress. Timeboost is not completely finished yet, so it is not recommended for production use.
* Fix configuration update
* Bound the search for pending messages in redis during a switchover
* Fix failing tests
* Update timeboost config in system_test
* Merge conflicts fix
* Do not modify gas cap if already `0` (infinite)
* Update pin
* Update geth pin
* Remove block metadata from sepolia
* Allow BlockMetadata to be generated when timeboost isn't enabled
* Fix lint error in systemTest
* Address PR comments
* Start tracking metadata in sepolia. Add configuration for the block at which to start tracking sepolia metadata.
* Update stake-token to WETH on L1
* Fix syncMonitor's BlockMetadataByNumber response for arb classic block numbers
* Make get_logs call in small chunks
* Fix number of block
* Use last confirmed assertion in watcher to find the starting point
* Update bold
* Fix lint
* Fix lint
* Update default
* Add trackers for missing block metadata retroactively
* Hex encode bidder signatures in S3 published CSVs. Previously the signature field, which is just raw bytes, was being written directly into the bid history CSVs which are published to S3 by the auctioneer. The uploaded files are for consumption by external parties and not used by the auctioneer after they have been uploaded, so no changes are needed to read the files. This change is being made pre-mainnet so we don't need to worry about fixing existing files. (cherry picked from commit 6a7b125)
* Stylus: visit all relevant contracts if deterministic
* Fix deadlock caused by txStreamer trying to broadcast an executed message during a stopAndWait
* Change impl to handle deadlock by draining broadcastChan upon context cancellation
* Typo fix
* Stop broadcastServer after txStreamer
* Document changes
* Update buildspec to same as master
* Fix dockerfile
* Add new Pectra header. Pulls in an update to go-ethereum that adds the new Pectra header.
* [Backport] Fix auction resolution during a tie
* [Backport] Remove timeboost's MaxQueuedTxCount config option
* [Backport] ExpressLane submissions are returned results immediately after they are queued, and channels are used in redisCoordinator to accept push-updates to redis instead of launching goroutines
* Add missing newlines
* [Backport] Implement block-based timeout and set timeboost queued transactions to time out at the round end time
* Handle getting creationAtBlock for l3 bold [backport OffchainLabs#2967]. Backports OffchainLabs#2967 and updates the BoLD repo to latest.
* Fix backport merge
* Update bold fast confirmation based on legacy fast confirmation improvements
* Memory improvements in DAS REST client: stream the response body to the json decoder and use a cancelable context for the http get (a minimal sketch of this pattern appears right after this commit message)
* Fix test
* Make config options required for navigating Pectra header issues default
* [Backport] Increase default value for timeboost.max-future-sequence-distance to 1000
* Revert "Make config options required for navigating pectra header issues default". This reverts commit 5439d26.
* [Backport] Express Lane Submission prechecker. Original PR: OffchainLabs#3039
* [Backport] Change timeboost_auctionresolution metric to Gauge. Auction resolution events happen only once per minute, so using a histogram metric for arb_sequencer_timeboost_auction does not really make sense; use a gauge instead so we can always see the last resolution duration. Originally merged in: OffchainLabs#3067
* Set block to start tracking arb1 block metadata
* [Backport] Rectify evm depth modification while faking a call inside StartTxHook for arb type txs
* Add RequestsHash to gen_header_json.go
* Validate timeboost config options correctly
* Address PR comments and remove dangerous before timeboost
* Fix ci failure
* Add a node.batch-poster.dangerous.fixed-gas-limit flag. There are rare cases where gas estimation could be malfunctioning and we want the ability to override it from a config flag. When --node.batch-poster.dangerous.fixed-gas-limit is non-zero, the value will be used as the gas limit when posting batches and will completely bypass gas estimation. Resolves: NIT-3225
* batch_poster: override delayed inbox for gas estimation
* batch_poster: update and document gas estimation code
* [Backport] Make some timeboost errors clearer. This is a backport of OffchainLabs#2987; it's not high priority to make it into the release, but it makes some errors seen by express lane controllers a little clearer.
* Add comment
* Fix build
* Comment failing test (in nitro)
* Fix ci
* Add myself as codeowner
* Play with the CI
* Cherry pick network go PR
* Fix go files
* Bring back databases from 2 to 1

---------

Signed-off-by: linchizhen <[email protected]>
Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: Tristan Wilson <[email protected]>
Co-authored-by: Pepper Lebeck-Jobe <[email protected]>
Co-authored-by: Ganesh Vanahalli <[email protected]>
Co-authored-by: Tsahi Zidenberg <[email protected]>
Co-authored-by: linchizhen <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Raul Jordan <[email protected]>
Co-authored-by: Tsahi Zidenberg <[email protected]>
Co-authored-by: Joshua Colvin <[email protected]>
Co-authored-by: Diego Ximenes <[email protected]>
Co-authored-by: Tristan-Wilson <[email protected]>
Co-authored-by: Derek Lee <[email protected]>
Co-authored-by: Aman Sanghi <[email protected]>
Co-authored-by: Pepper Lebeck-Jobe <[email protected]>
Co-authored-by: Sneh Koul <[email protected]>
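The DAS REST client bullet above describes a standard Go streaming pattern: decode the HTTP response body directly instead of buffering it, and make the GET cancelable. A minimal sketch under those assumptions, using a hypothetical endpoint and result type rather than nitro's actual DAS client code:

package dasrest

import (
    "context"
    "encoding/json"
    "fmt"
    "net/http"
    "time"
)

// restResponse is a hypothetical result type standing in for the
// DAS REST client's real response struct.
type restResponse struct {
    Data string `json:"data"`
}

func fetchJSON(ctx context.Context, url string) (*restResponse, error) {
    // Cancelable context: the request is aborted if ctx is canceled or times out.
    ctx, cancel := context.WithTimeout(ctx, 5*time.Second)
    defer cancel()

    req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
    if err != nil {
        return nil, err
    }
    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        return nil, err
    }
    defer resp.Body.Close()
    if resp.StatusCode != http.StatusOK {
        return nil, fmt.Errorf("unexpected status: %s", resp.Status)
    }

    // Stream the body straight into the JSON decoder instead of reading
    // the whole payload into memory first.
    var out restResponse
    if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
        return nil, err
    }
    return &out, nil
}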
1 parent f17eaca commit ad7d2ac


51 files changed, +1544 −883 lines

.github/buildspec.yml (+4)

@@ -1,5 +1,9 @@
 version: 0.2
 
+env:
+  exported-variables:
+    - IMAGE_TAG
+
 phases:
   pre_build:
     commands:

.github/manifest-buildspec.yml (+57, new file)

@@ -0,0 +1,57 @@
+version: 0.2
+
+phases:
+  pre_build:
+    commands:
+      - COMMIT_HASH=$(git rev-parse --short=7 HEAD || echo "latest")
+      - VERSION_TAG=$(git tag --points-at HEAD | sed '/-/!s/$/_/' | sort -rV | sed 's/_$//' | head -n 1 | grep ^ || git show -s --pretty=%D | sed 's/, /\n/g' | grep -v '^origin/' |grep -v '^grafted\|HEAD\|master\|main$' || echo "dev")
+      - NITRO_VERSION=${VERSION_TAG}-${COMMIT_HASH}
+      - IMAGE_TAG=${NITRO_VERSION}
+
+      # Log IMAGE_TAG environment variable
+      - echo "Using IMAGE_TAG environment variable $IMAGE_TAG"
+
+      # Login to ECR
+      - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $REPOSITORY_URI
+      # Login to DockerHub if credentials provided
+      - |
+        if [ -n "$DOCKERHUB_USERNAME" ] && [ -n "$DOCKERHUB_PASSWORD" ]; then
+          echo "$DOCKERHUB_PASSWORD" | docker login --username "$DOCKERHUB_USERNAME" --password-stdin
+        fi
+      # Enable experimental features while preserving auth
+      - mkdir -p $HOME/.docker
+      - |
+        if [ -f "$HOME/.docker/config.json" ]; then
+          # Add experimental flag to existing config
+          cat $HOME/.docker/config.json | jq '. + {"experimental":"enabled"}' > $HOME/.docker/config.json.tmp
+          mv $HOME/.docker/config.json.tmp $HOME/.docker/config.json
+        else
+          # Create new config with experimental flag
+          echo '{"experimental":"enabled"}' > $HOME/.docker/config.json
+        fi
+
+  build:
+    commands:
+      # Regular node image
+      - docker manifest create $REPOSITORY_URI:$IMAGE_TAG $REPOSITORY_URI:$IMAGE_TAG-amd64 $REPOSITORY_URI:$IMAGE_TAG-arm64
+      - docker manifest annotate $REPOSITORY_URI:$IMAGE_TAG $REPOSITORY_URI:$IMAGE_TAG-amd64 --arch amd64
+      - docker manifest annotate $REPOSITORY_URI:$IMAGE_TAG $REPOSITORY_URI:$IMAGE_TAG-arm64 --arch arm64
+      - docker manifest push $REPOSITORY_URI:$IMAGE_TAG
+
+      # Slim variant
+      - docker manifest create $REPOSITORY_URI:$IMAGE_TAG-slim $REPOSITORY_URI:$IMAGE_TAG-slim-amd64 $REPOSITORY_URI:$IMAGE_TAG-slim-arm64
+      - docker manifest annotate $REPOSITORY_URI:$IMAGE_TAG-slim $REPOSITORY_URI:$IMAGE_TAG-slim-amd64 --arch amd64
+      - docker manifest annotate $REPOSITORY_URI:$IMAGE_TAG-slim $REPOSITORY_URI:$IMAGE_TAG-slim-arm64 --arch arm64
+      - docker manifest push $REPOSITORY_URI:$IMAGE_TAG-slim
+
+      # Dev variant
+      - docker manifest create $REPOSITORY_URI:$IMAGE_TAG-dev $REPOSITORY_URI:$IMAGE_TAG-dev-amd64 $REPOSITORY_URI:$IMAGE_TAG-dev-arm64
+      - docker manifest annotate $REPOSITORY_URI:$IMAGE_TAG-dev $REPOSITORY_URI:$IMAGE_TAG-dev-amd64 --arch amd64
+      - docker manifest annotate $REPOSITORY_URI:$IMAGE_TAG-dev $REPOSITORY_URI:$IMAGE_TAG-dev-arm64 --arch arm64
+      - docker manifest push $REPOSITORY_URI:$IMAGE_TAG-dev
+
+      # Validator variant
+      - docker manifest create $REPOSITORY_URI:$IMAGE_TAG-validator $REPOSITORY_URI:$IMAGE_TAG-validator-amd64 $REPOSITORY_URI:$IMAGE_TAG-validator-arm64
+      - docker manifest annotate $REPOSITORY_URI:$IMAGE_TAG-validator $REPOSITORY_URI:$IMAGE_TAG-validator-amd64 --arch amd64
+      - docker manifest annotate $REPOSITORY_URI:$IMAGE_TAG-validator $REPOSITORY_URI:$IMAGE_TAG-validator-arm64 --arch arm64
+      - docker manifest push $REPOSITORY_URI:$IMAGE_TAG-validator

.github/workflows/ci.yml (+4)

@@ -127,6 +127,9 @@ jobs:
 
       - name: Install Foundry
        uses: foundry-rs/foundry-toolchain@v1
+
+      - name: Install cbindgen
+        run: cargo install --force cbindgen
 
      - name: Install cbindgen
        run: cargo install --force cbindgen
@@ -241,6 +244,7 @@ jobs:
        run: |
          make build-prover-bin
          target/bin/prover target/machines/latest/machine.wavm.br -b --json-inputs="${{ github.workspace }}/target/TestProgramStorage/block_inputs.json"
+
      - name: run jit prover on block input json
        if: matrix.test-mode == 'defaults'
        run: |

.github/workflows/codeql-analysis.yml (+3)

@@ -86,6 +86,9 @@ jobs:
      - name: Install cbindgen
        run: cargo install --force cbindgen
 
+      - name: Install cbindgen
+        run: cargo install --force cbindgen
+
      - name: Cache Rust Build Products
        uses: actions/cache@v3
        with:

CODEOWNERS (+1 −1)

@@ -2,4 +2,4 @@
 # later match takes precedence, they will be requested for review when someone
 # opens a pull request.
 
-* @ImJeremyHe @nomaxg @alysiahuggins @zacshowa @Sneh1999 @sveitser @jbearer
+* @ImJeremyHe @nomaxg @alysiahuggins @zacshowa @Sneh1999 @sveitser @jbearer @jjeangal

Makefile (−5)

@@ -596,11 +596,6 @@ $(stylus_test_hostio-test_wasm): $(stylus_test_hostio-test_src)
 	./scripts/remove_reference_types.sh $@
 	@touch -c $@ # cargo might decide to not rebuild the binary
 
-$(stylus_test_hostio-test_wasm): $(stylus_test_hostio-test_src)
-	$(cargo_nightly) --manifest-path $< --release --config $(stylus_cargo)
-	./scripts/remove_reference_types.sh $@
-	@touch -c $@ # cargo might decide to not rebuild the binary
-
 contracts/test/prover/proofs/float%.json: $(arbitrator_cases)/float%.wasm $(prover_bin) $(output_latest)/soft-float.wasm
 	$(prover_bin) $< -l $(output_latest)/soft-float.wasm -o $@ -b --allow-hostapi --require-success

arbnode/batch_poster.go (+95 −53)

@@ -34,7 +34,6 @@ import (
     "github.com/ethereum/go-ethereum/rlp"
     "github.com/ethereum/go-ethereum/rpc"
     "github.com/offchainlabs/bold/solgen/go/bridgegen"
-
     "github.com/offchainlabs/nitro/arbnode/dataposter"
     "github.com/offchainlabs/nitro/arbnode/dataposter/storage"
     "github.com/offchainlabs/nitro/arbnode/redislock"
@@ -143,7 +142,8 @@
 )
 
 type BatchPosterDangerousConfig struct {
-    AllowPostingFirstBatchWhenSequencerMessageCountMismatch bool `koanf:"allow-posting-first-batch-when-sequencer-message-count-mismatch"`
+    AllowPostingFirstBatchWhenSequencerMessageCountMismatch bool   `koanf:"allow-posting-first-batch-when-sequencer-message-count-mismatch"`
+    FixedGasLimit                                           uint64 `koanf:"fixed-gas-limit"`
 }
 
 type BatchPosterConfig struct {
@@ -229,6 +229,7 @@ type BatchPosterConfigFetcher func() *BatchPosterConfig
 
 func DangerousBatchPosterConfigAddOptions(prefix string, f *pflag.FlagSet) {
     f.Bool(prefix+".allow-posting-first-batch-when-sequencer-message-count-mismatch", DefaultBatchPosterConfig.Dangerous.AllowPostingFirstBatchWhenSequencerMessageCountMismatch, "allow posting the first batch even if sequence number doesn't match chain (useful after force-inclusion)")
+    f.Uint64(prefix+".fixed-gas-limit", DefaultBatchPosterConfig.Dangerous.FixedGasLimit, "use this gas limit for batch posting instead of estimating it")
 }
 
 func BatchPosterConfigAddOptions(prefix string, f *pflag.FlagSet) {
@@ -1182,73 +1183,85 @@ type estimateGasParams struct {
     BlobHashes []common.Hash `json:"blobVersionedHashes,omitempty"`
 }
 
+type OverrideAccount struct {
+    StateDiff map[common.Hash]common.Hash `json:"stateDiff"`
+}
+
+type StateOverride map[common.Address]OverrideAccount
+
 func estimateGas(client rpc.ClientInterface, ctx context.Context, params estimateGasParams) (uint64, error) {
     var gas hexutil.Uint64
     err := client.CallContext(ctx, &gas, "eth_estimateGas", params)
     return uint64(gas), err
 }
 
-func (b *BatchPoster) estimateGas(
+func (b *BatchPoster) estimateGasSimple(
     ctx context.Context,
-    sequencerMessage []byte,
-    delayedMessages uint64,
     realData []byte,
     realBlobs []kzg4844.Blob,
-    realNonce uint64,
     realAccessList types.AccessList,
-    delayProof *bridgegen.DelayProof,
 ) (uint64, error) {
 
     config := b.config()
     rpcClient := b.l1Reader.Client()
     rawRpcClient := rpcClient.Client()
-    useNormalEstimation := b.dataPoster.MaxMempoolTransactions() == 1
-    if !useNormalEstimation {
-        // Check if we can use normal estimation anyways because we're at the latest nonce
-        latestNonce, err := rpcClient.NonceAt(ctx, b.dataPoster.Sender(), nil)
-        if err != nil {
-            return 0, err
-        }
-        useNormalEstimation = latestNonce == realNonce
-    }
     latestHeader, err := rpcClient.HeaderByNumber(ctx, nil)
     if err != nil {
         return 0, err
     }
     maxFeePerGas := arbmath.BigMulByUBips(latestHeader.BaseFee, config.GasEstimateBaseFeeMultipleBips)
-    if useNormalEstimation {
-        _, realBlobHashes, err := blobs.ComputeCommitmentsAndHashes(realBlobs)
-        if err != nil {
-            return 0, fmt.Errorf("failed to compute real blob commitments: %w", err)
-        }
-        // If we're at the latest nonce, we can skip the special future tx estimate stuff
-        gas, err := estimateGas(rawRpcClient, ctx, estimateGasParams{
-            From:         b.dataPoster.Sender(),
-            To:           &b.seqInboxAddr,
-            Data:         realData,
-            MaxFeePerGas: (*hexutil.Big)(maxFeePerGas),
-            BlobHashes:   realBlobHashes,
-            AccessList:   realAccessList,
-        })
-        if err != nil {
-            return 0, fmt.Errorf("%w: %w", ErrNormalGasEstimationFailed, err)
-        }
-        return gas + config.ExtraBatchGas, nil
+    _, realBlobHashes, err := blobs.ComputeCommitmentsAndHashes(realBlobs)
+    if err != nil {
+        return 0, fmt.Errorf("failed to compute real blob commitments: %w", err)
+    }
+    // If we're at the latest nonce, we can skip the special future tx estimate stuff
+    gas, err := estimateGas(rawRpcClient, ctx, estimateGasParams{
+        From:         b.dataPoster.Sender(),
+        To:           &b.seqInboxAddr,
+        Data:         realData,
+        MaxFeePerGas: (*hexutil.Big)(maxFeePerGas),
+        BlobHashes:   realBlobHashes,
+        AccessList:   realAccessList,
+    })
+    if err != nil {
+        return 0, fmt.Errorf("%w: %w", ErrNormalGasEstimationFailed, err)
+    }
+    return gas + config.ExtraBatchGas, nil
+}
+
+// This estimates gas for a batch with future nonce
+// a prev. batch is already pending in the parent chain's mempool
+func (b *BatchPoster) estimateGasForFutureTx(
+    ctx context.Context,
+    sequencerMessage []byte,
+    delayedMessagesBefore uint64,
+    delayedMessagesAfter uint64,
+    realAccessList types.AccessList,
+    usingBlobs bool,
+    delayProof *bridgegen.DelayProof,
+) (uint64, error) {
+    config := b.config()
+    rpcClient := b.l1Reader.Client()
+    rawRpcClient := rpcClient.Client()
+    latestHeader, err := rpcClient.HeaderByNumber(ctx, nil)
+    if err != nil {
+        return 0, err
     }
+    maxFeePerGas := arbmath.BigMulByUBips(latestHeader.BaseFee, config.GasEstimateBaseFeeMultipleBips)
 
     // Here we set seqNum to MaxUint256, and prevMsgNum to 0, because it disables the smart contracts' consistency checks.
     // However, we set nextMsgNum to 1 because it is necessary for a correct estimation for the final to be non-zero.
     // Because we're likely estimating against older state, this might not be the actual next message,
     // but the gas used should be the same.
-    data, kzgBlobs, err := b.encodeAddBatch(abi.MaxUint256, 0, 1, sequencerMessage, delayedMessages, len(realBlobs) > 0, delayProof)
+    data, kzgBlobs, err := b.encodeAddBatch(abi.MaxUint256, 0, 1, sequencerMessage, delayedMessagesAfter, usingBlobs, delayProof)
     if err != nil {
         return 0, err
     }
     _, blobHashes, err := blobs.ComputeCommitmentsAndHashes(kzgBlobs)
     if err != nil {
         return 0, fmt.Errorf("failed to compute blob commitments: %w", err)
     }
-    gas, err := estimateGas(rawRpcClient, ctx, estimateGasParams{
+    gasParams := estimateGasParams{
         From: b.dataPoster.Sender(),
         To:   &b.seqInboxAddr,
         Data: data,
@@ -1257,7 +1270,22 @@ func (b *BatchPoster) estimateGas(
         // This isn't perfect because we're probably estimating the batch at a different sequence number,
         // but it should overestimate rather than underestimate which is fine.
         AccessList: realAccessList,
-    })
+    }
+    // slot 0 in the SequencerInbox smart contract holds totalDelayedMessagesRead -
+    // This is the number of delayed messages that sequencer knows were processed
+    // SequencerInbox checks this value to make sure delayed inbox isn't going backward,
+    // And it makes it know if a delayProof is needed
+    // Both are required for successful batch posting
+    stateOverride := StateOverride{
+        b.seqInboxAddr: {
+            StateDiff: map[common.Hash]common.Hash{
+                // slot 0
+                {}: common.Hash(arbmath.Uint64ToU256Bytes(delayedMessagesBefore)),
+            },
+        },
+    }
+    var gas hexutil.Uint64
+    err = rawRpcClient.CallContext(ctx, &gas, "eth_estimateGas", gasParams, rpc.PendingBlockNumber, stateOverride)
     if err != nil {
         sequencerMessageHeader := sequencerMessage
         if len(sequencerMessageHeader) > 33 {
@@ -1266,13 +1294,14 @@ func (b *BatchPoster) estimateGas(
         log.Warn(
             "error estimating gas for batch",
             "err", err,
-            "delayedMessages", delayedMessages,
+            "delayedMessagesBefore", delayedMessagesBefore,
+            "delayedMessagesAfter", delayedMessagesAfter,
             "sequencerMessageHeader", hex.EncodeToString(sequencerMessageHeader),
             "sequencerMessageLen", len(sequencerMessage),
         )
         return 0, fmt.Errorf("error estimating gas for batch: %w", err)
     }
-    return gas + config.ExtraBatchGas, nil
+    return uint64(gas) + config.ExtraBatchGas, nil
 }
 
 const ethPosBlockTime = 12 * time.Second
@@ -1359,12 +1388,6 @@ func (b *BatchPoster) maybePostSequencerBatch(ctx context.Context) (bool, error)
         return false, nil
     }
 
-    lastPotentialMsg, err := b.streamer.GetMessage(msgCount - 1)
-    if err != nil {
-
-        return false, err
-    }
-
     config := b.config()
     forcePostBatch := config.MaxDelay <= 0
 
@@ -1651,15 +1674,34 @@ func (b *BatchPoster) maybePostSequencerBatch(ctx context.Context) (bool, error)
         return false, fmt.Errorf("produced %v blobs for batch but a block can only hold %v (compressed batch was %v bytes long)", len(kzgBlobs), params.MaxBlobGasPerBlock/params.BlobTxBlobGasPerBlob, len(sequencerMsg))
     }
     accessList := b.accessList(batchPosition.NextSeqNum, b.building.segments.delayedMsg)
+    var gasLimit uint64
+    if b.config().Dangerous.FixedGasLimit != 0 {
+        gasLimit = b.config().Dangerous.FixedGasLimit
+    } else {
+        useSimpleEstimation := b.dataPoster.MaxMempoolTransactions() == 1
+        if !useSimpleEstimation {
+            // Check if we can use normal estimation anyways because we're at the latest nonce
+            latestNonce, err := b.l1Reader.Client().NonceAt(ctx, b.dataPoster.Sender(), nil)
+            if err != nil {
+                return false, err
+            }
+            useSimpleEstimation = latestNonce == nonce
+        }
 
-    // On restart, we may be trying to estimate gas for a batch whose successor has
-    // already made it into pending state, if not latest state.
-    // In that case, we might get a revert with `DelayedBackwards()`.
-    // To avoid that, we artificially increase the delayed messages to `lastPotentialMsg.DelayedMessagesRead`.
-    // In theory, this might reduce gas usage, but only by a factor that's already
-    // accounted for in `config.ExtraBatchGas`, as that same factor can appear if a user
-    // posts a new delayed message that we didn't see while gas estimating.
-    gasLimit, err := b.estimateGas(ctx, sequencerMsg, lastPotentialMsg.DelayedMessagesRead, data, kzgBlobs, nonce, accessList, delayProof)
+        if useSimpleEstimation {
+            gasLimit, err = b.estimateGasSimple(ctx, data, kzgBlobs, accessList)
+        } else {
+            // When there are previous batches queued up in the dataPoster, we override the delayed message count in the sequencer inbox
+            // so it accepts the corresponding delay proof. Otherwise, the gas estimation would revert.
+            var delayedMsgBefore uint64
+            if b.building.firstDelayedMsg != nil {
+                delayedMsgBefore = b.building.firstDelayedMsg.DelayedMessagesRead - 1
+            } else if b.building.firstNonDelayedMsg != nil {
+                delayedMsgBefore = b.building.firstNonDelayedMsg.DelayedMessagesRead
+            }
+            gasLimit, err = b.estimateGasForFutureTx(ctx, sequencerMsg, delayedMsgBefore, b.building.segments.delayedMsg, accessList, len(kzgBlobs) > 0, delayProof)
+        }
+    }
     if err != nil {
         return false, err
     }

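For context on the state-override trick used in estimateGasForFutureTx above: eth_estimateGas accepts an optional block number and a state override set, so the caller can pretend a storage slot already holds a chosen value before estimating. A minimal standalone sketch of that call shape, assuming a placeholder RPC endpoint, contract address, and slot value (this is not nitro's wiring, just the same RPC pattern):

package main

import (
    "context"
    "fmt"
    "log"

    "github.com/ethereum/go-ethereum/common"
    "github.com/ethereum/go-ethereum/common/hexutil"
    "github.com/ethereum/go-ethereum/rpc"
)

// overrideAccount mirrors the stateDiff shape accepted by eth_estimateGas.
type overrideAccount struct {
    StateDiff map[common.Hash]common.Hash `json:"stateDiff"`
}

type stateOverride map[common.Address]overrideAccount

func main() {
    ctx := context.Background()
    client, err := rpc.DialContext(ctx, "http://localhost:8545") // placeholder endpoint
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()

    contract := common.HexToAddress("0x0000000000000000000000000000000000000001") // placeholder address

    // Pretend storage slot 0 of the contract already holds the value we
    // expect at posting time, so the estimate does not revert.
    override := stateOverride{
        contract: {
            StateDiff: map[common.Hash]common.Hash{
                {}: common.HexToHash("0x1"), // slot 0 -> 1 (example value)
            },
        },
    }

    callArgs := map[string]interface{}{
        "from": common.Address{}.Hex(),
        "to":   contract.Hex(),
        "data": "0x",
    }

    var gas hexutil.Uint64
    // Positional args: call object, block number, state override set.
    if err := client.CallContext(ctx, &gas, "eth_estimateGas", callArgs, rpc.PendingBlockNumber, override); err != nil {
        log.Fatal(err)
    }
    fmt.Println("estimated gas:", uint64(gas))
}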
arbnode/node.go (+5 −3)

@@ -1082,9 +1082,6 @@
     if n.MessagePruner != nil && n.MessagePruner.Started() {
         n.MessagePruner.StopAndWait()
     }
-    if n.BroadcastServer != nil && n.BroadcastServer.Started() {
-        n.BroadcastServer.StopAndWait()
-    }
     if n.BroadcastClients != nil {
         n.BroadcastClients.StopAndWait()
     }
@@ -1106,6 +1103,11 @@
     if n.TxStreamer.Started() {
         n.TxStreamer.StopAndWait()
     }
+    // n.BroadcastServer is stopped after txStreamer and inboxReader because if done before it would lead to a deadlock, as the threads from these two components
+    // attempt to Broadcast i.e send feedMessage to clientManager's broadcastChan when there wont be any reader to read it as n.BroadcastServer would've been stopped
+    if n.BroadcastServer != nil && n.BroadcastServer.Started() {
+        n.BroadcastServer.StopAndWait()
+    }
     if n.SeqCoordinator != nil && n.SeqCoordinator.Started() {
         // Just stops the redis client (most other stuff was stopped earlier)
         n.SeqCoordinator.StopAndWait()
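The comment added in node.go above points at a common Go shutdown hazard: stopping a channel consumer before its producers leaves the producers blocked on send. A minimal sketch with generic names (not the actual streamer or broadcaster types), showing why the consumer is stopped last and keeps reading until the producers' context is canceled:

package main

import (
    "context"
    "fmt"
    "sync"
    "time"
)

func main() {
    feed := make(chan int) // unbuffered, like a broadcast channel

    prodCtx, stopProducers := context.WithCancel(context.Background())
    var producers sync.WaitGroup

    // Producer: stands in for the streamer threads that call Broadcast.
    producers.Add(1)
    go func() {
        defer producers.Done()
        for i := 0; ; i++ {
            select {
            case feed <- i: // would block forever if nobody were reading
            case <-prodCtx.Done():
                return
            }
        }
    }()

    // Consumer: stands in for the broadcast server reading the channel.
    consumerDone := make(chan struct{})
    go func() {
        defer close(consumerDone)
        for {
            select {
            case msg := <-feed:
                _ = msg // forward to clients
            case <-prodCtx.Done():
                // Keep draining until producers are told to stop, then exit;
                // exiting earlier risks leaving a producer stuck on a send.
                return
            }
        }
    }()

    time.Sleep(10 * time.Millisecond)

    // Safe order: stop producers first, wait for them, then let the consumer exit.
    stopProducers()
    producers.Wait()
    <-consumerDone
    fmt.Println("clean shutdown, no blocked senders")
}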
