eth/catalyst: implement getBlobsV2 #31791

Open · wants to merge 5 commits into master
Conversation

@MariusVanDerWijden (Member) commented May 9, 2025:

Implements engine_getBlobsV2 which is needed for PeerDAS
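For orientation, here is a hedged sketch of the shape such a handler takes. Every name in it is a stand-in rather than the PR's code; the one behavior taken from the engine API spec is that engine_getBlobsV2 answers all-or-nothing, returning null if any requested blob is missing from the pool:

    package main

    // Hypothetical stand-in types; the real PR wires this through
    // eth/catalyst, beacon/engine and the blobpool.
    type (
    	hash  [32]byte
    	blob  []byte
    	proof [48]byte
    )

    type blobAndProofV2 struct {
    	Blob       blob
    	CellProofs []proof // the post-Osaka cell proofs for this blob
    }

    // blobPool is a hypothetical view of the transaction pool lookup.
    type blobPool interface {
    	GetBlobAndProofs(versionedHash hash) (blob, []proof, bool)
    }

    // getBlobsV2 answers from the local pool. Per the engine API spec,
    // the response is all-or-nothing: if any requested blob is missing,
    // the method returns null instead of a partial list.
    func getBlobsV2(pool blobPool, hashes []hash) []*blobAndProofV2 {
    	res := make([]*blobAndProofV2, len(hashes))
    	for i, h := range hashes {
    		b, proofs, ok := pool.GetBlobAndProofs(h)
    		if !ok {
    			return nil // any miss empties the whole response
    		}
    		res[i] = &blobAndProofV2{Blob: b, CellProofs: proofs}
    	}
    	return res
    }

    func main() {} // sketch only

The first inline review thread anchors on this hunk of the diff, where cell proofs are recomputed for transactions still carrying version-0 sidecars after Osaka: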

log.Warn("Encountered Version 0 transaction post-Osaka, recompute proofs", "hash", ltx.Hash)
sidecar.Proofs = make([]kzg4844.Proof, 0)
for _, blob := range sidecar.Blobs {
cellProofs, err := kzg4844.ComputeCells(&blob)
@jwasinger (Contributor) commented May 26, 2025:

When we pass the fork, as long as the prioritized blob transactions are v0, we will be computing cell proofs for each blob they contain.

I've benchmarked this and it's pretty slow: 200-300ms depending on the machine (I've tested on my M2 MacBook Pro and on a server machine I'm using to run a perfnet node).

It might be safer to just disallow mining v0 transactions after we've passed the fork, and also purge them from the blobpool.
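A minimal benchmark sketch of the path being measured, assuming the kzg4844.ComputeCells helper from the hunk above (its signature is inferred from that snippet); run with go test -bench ComputeCells:

    package bench

    import (
    	"testing"

    	"github.com/ethereum/go-ethereum/crypto/kzg4844"
    )

    // BenchmarkComputeCells measures recomputing the cell proofs for a
    // single blob, the per-blob cost behind the 200-300ms figure above.
    func BenchmarkComputeCells(b *testing.B) {
    	var blob kzg4844.Blob // an all-zero blob is a canonical, valid input
    	for i := 0; i < b.N; i++ {
    		if _, err := kzg4844.ComputeCells(&blob); err != nil {
    			b.Fatal(err)
    		}
    	}
    }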

@MariusVanDerWijden (Member, Author) replied:

Yes, that is kind of expected. Other clients are just dropping the transactions, but I think we should recompute the proofs. It's still within the limits of what we can do for a while, and only the block-producing machine will have to do it. I think we should take the high road and not invalidate valid transactions at the fork point.

Contributor:

As part of this PR, we will need to update the billy shelf "slotter" to account for the size of cell proofs after the fork. I'm not sure how complex it would be to accommodate both v0/v1 transactions, but IMO it strengthens the case for purging v0 transactions at the fork block.

@MariusVanDerWijden (Member, Author):

Yep you're right. The current slotter will create shelves of the following sizes:

level   size (bytes)
    0          4096
    1        135168
    2        266240
    3        397312
    4        528384
    5        659456
    6        790528
    7        921600
    8       1052672
    9       1183744
   10       1314816
   11       1445888
   12       1576960
   13       1708032
   14       1839104
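These sizes follow from the slotter's shape: shelf 0 fits a blobless blob transaction, and each further shelf adds room for one more blob. A sketch that reproduces the table above, with constants assumed to mirror core/txpool/blobpool rather than copied from it:

    package main

    import "fmt"

    // Constants assumed to mirror core/txpool/blobpool.
    const (
    	blobSize               = 131072      // bytes per blob
    	txAvgSize              = 4 * 1024    // shelf 0: blob transactions without blobs
    	txMaxSize              = 1024 * 1024 // calldata headroom for the largest shelf
    	maxBlobsPerTransaction = 6
    )

    // newSlotter returns a generator yielding one shelf size per call,
    // finishing once a shelf covers a max-blob transaction plus headroom.
    func newSlotter() func() (uint32, bool) {
    	slotsize := uint32(txAvgSize)
    	slotsize -= uint32(blobSize) // underflows; the first call wraps it back
    	return func() (uint32, bool) {
    		slotsize += blobSize
    		return slotsize, slotsize > maxBlobsPerTransaction*blobSize+txMaxSize
    	}
    }

    func main() {
    	next := newSlotter()
    	for level := 0; ; level++ {
    		size, done := next()
    		fmt.Printf("level %2d: %7d\n", level, size) // matches the table above
    		if done {
    			break
    		}
    	}
    }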

RLP-encoding a blob transaction with 300 bytes of calldata results in the following sizes:

NumBlobs   pre (bytes)   post (bytes)
       0           372            372
       1        131584         133105
       2        262794         265833
       3        394001         398559
       4        525208         531285
       5        656415         664011
       6        787624         796738
      10       1312453        1327643
      14       1837281        1858547
      20       2624523        2654903

So the shelf sizes already stop matching at 4 blobs: the post-fork encoding (531285 bytes) no longer fits the level-4 shelf (528384 bytes).

@MariusVanDerWijden (Member, Author):

I added a change now to address this. It results in the following level sizes for the slotter:

[4096 137216 270336 403456 536576 669696 802816 935936 1069056 1202176 1335296 1468416 1601536 1734656 1867776]

At level 14 this is ~8 KiB larger than the encoded 14-blob transaction with 300 bytes of calldata. Since I expect bigger blob transactions to contain more calldata, I think that's the right way to go.
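Reading those numbers: the shelves now grow by a constant 133120 bytes, i.e. the per-blob step was bumped by 2 KiB over the plain 131072-byte blob size. A quick arithmetic check on the quoted sizes (not the PR's code):

    package main

    import "fmt"

    func main() {
    	// The new shelf sizes quoted above; check the step between them.
    	sizes := []int{4096, 137216, 270336, 403456, 536576, 669696, 802816,
    		935936, 1069056, 1202176, 1335296, 1468416, 1601536, 1734656, 1867776}
    	for i := 1; i < len(sizes); i++ {
    		fmt.Println(sizes[i] - sizes[i-1]) // 133120 = 131072 + 2048 per blob
    	}
    }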

@MariusVanDerWijden (Member, Author):

I still need to test whether we can easily change the slot sizes for existing nodes, though.

@MariusVanDerWijden (Member, Author):

Turns out the shelves are named after their size, so with new sizes we would just open new shelves and silently lose the old transactions.

Contributor:

Solution: just open a second v2 billy instance and migrate the transactions from the old one on startup. The migrated transactions will carry some overhead as empty slack space, but that's OK.
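A hedged sketch of that migration idea; the store interface below is a hypothetical stand-in, not billy's actual API:

    package main

    // store is a hypothetical stand-in for a billy-like slotted datastore.
    type store interface {
    	Iterate(fn func(id uint64, data []byte) error) error
    	Put(data []byte) (uint64, error)
    }

    // migrate drains every transaction from the old (pre-fork slotted)
    // store into a new one cut with the post-Osaka shelf sizes. Entries
    // may land in larger slots than needed, wasting some slack space.
    func migrate(oldStore, newStore store) error {
    	return oldStore.Iterate(func(id uint64, data []byte) error {
    		_, err := newStore.Put(data)
    		return err
    	})
    }

    func main() {} // sketch only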

@MariusVanDerWijden (Member, Author):

I've decided to remove the commits addressing this and do the slotter migration in a follow-up PR here: #31966
