statement-store: make encode/hash faster #10882
base: master
Conversation
Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
AndreiEres left a comment:
Should we reduce the calls to hash() then, and pass it as an argument where possible?
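The pattern the reviewer suggests could look roughly like the sketch below: compute the hash once at the entry point and thread it through, instead of re-hashing at every call site. All names here (`process`, `cheap_hash`, `is_known`, `index`) are hypothetical stand-ins, not the actual statement-store API.

```rust
// Hypothetical sketch: compute the statement hash once and pass it along,
// rather than calling statement.hash() repeatedly in every helper.
fn process(statement: &[u8]) -> u64 {
    let hash = cheap_hash(statement); // computed exactly once
    if is_known(hash) {
        return hash; // duplicate from another peer: no re-hash, no re-encode
    }
    index(hash);
    hash
}

// Stand-in hash for the sketch; the real store hashes the SCALE encoding
// with a cryptographic hash function.
fn cheap_hash(data: &[u8]) -> u64 {
    data.iter()
        .fold(0u64, |acc, &b| acc.wrapping_mul(31).wrapping_add(b as u64))
}

fn is_known(_h: u64) -> bool {
    false
}

fn index(_h: u64) {}

fn main() {
    let h = process(b"statement");
    assert_eq!(h, cheap_hash(b"statement"));
    println!("hash computed once: {h}");
}
```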
```diff
 queue_sender,
 metrics: None,
-initial_sync_timeout: Box::pin(tokio::time::sleep(INITIAL_SYNC_BURST_INTERVAL).fuse()),
+initial_sync_timeout: Box::pin(pending().fuse()),
```
How is this related?
The benchmark was crashing because tokio::time requires a tokio runtime.
```rust
self.num_topics as u32;

let mut output = Vec::new();
// Calculate capacity for preallocation using size_of for type sizes:
```
Rust size or SCALE-encoded size?
The SCALE-encoded size is what we want; this is a close approximation of it without actually SCALE-encoding.
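A minimal sketch of that approximation, assuming a simplified statement type (the fields and the constant slack here are illustrative, not the actual `Statement` layout): sum the fixed-size parts via `size_of`, add the variable payload length, and leave a few bytes of slack for SCALE's compact length prefixes.

```rust
use std::mem::size_of;

// Hypothetical simplified statement: a list of 32-byte topics plus a payload.
struct Statement {
    topics: Vec<[u8; 32]>,
    data: Vec<u8>,
}

impl Statement {
    // Approximate the SCALE-encoded size without encoding: fixed-size
    // parts via size_of, plus payload length, plus a small allowance
    // for compact length prefixes. Slight over-reservation is fine.
    fn estimated_encoded_size(&self) -> usize {
        self.topics.len() * size_of::<[u8; 32]>() + self.data.len() + 16
    }

    // Encode into a buffer reserved up front, so it never reallocates.
    fn encode(&self) -> Vec<u8> {
        let mut output = Vec::with_capacity(self.estimated_encoded_size());
        for t in &self.topics {
            output.extend_from_slice(t);
        }
        output.extend_from_slice(&self.data);
        output
    }
}

fn main() {
    let s = Statement { topics: vec![[0u8; 32]; 2], data: vec![1, 2, 3] };
    let encoded = s.encode();
    // The estimate is an upper bound on what we actually wrote.
    assert!(s.estimated_encoded_size() >= encoded.len());
    assert_eq!(encoded.len(), 67);
}
```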
Could you put it in a comment please?
Co-authored-by: Andrei Eres <eresav@me.com>
By reserving the memory in advance we halve the encoding time, which ultimately speeds up the statement.hash() function that gets called in a lot of places.
More importantly, when we start being connected to more nodes the hash function gets called a lot for the same statement because we might receive the same statement from all peers we are connected to.
For example, on Versi, on_statements ate a lot of time when running with 15 nodes; see #10814 (comment).
Modified the statement_network benchmark to also be parameterizable by the number of times we might receive a statement. If we receive it from 16 peers, this PR gives a speed-up of ~16%, which I consider non-negligible, so this is a worthwhile improvement.
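The effect the description claims can be illustrated with a small standalone sketch (not the PR's code): a buffer built with `Vec::new()` reallocates as it grows, while a buffer reserved once with `Vec::with_capacity()` never grows during encoding.

```rust
// Illustrative sketch of why reserving capacity up front helps:
// appending into Vec::new() triggers reallocations as the buffer grows;
// a single up-front with_capacity() reservation avoids them entirely.
fn encode_into(out: &mut Vec<u8>, payload: &[u8]) {
    out.extend_from_slice(payload);
}

fn main() {
    let payload = vec![0u8; 4096];

    // Grows (and reallocates) on demand.
    let mut grown = Vec::new();
    encode_into(&mut grown, &payload);

    // Reserved once; capacity stays fixed while encoding.
    let mut reserved = Vec::with_capacity(payload.len());
    let cap_before = reserved.capacity();
    encode_into(&mut reserved, &payload);

    assert_eq!(reserved.capacity(), cap_before);
    assert_eq!(grown, reserved);
}
```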