mpt: awasm noble keccak | Benchmark comparison #4278
Conversation
Adds a dedicated keccak256 throughput benchmark comparing @noble/hashes (the current baseline) against the three variants shipped by @awasm/noble: wasm, faster-JS, and wasm_threads. Run with: `npx tsx benchmarks/keccak.ts`
Single-process measurement made the @noble/hashes baseline call site go polymorphic across the four hash functions and get deoptimized, skewing the comparison. Each implementation now runs in its own child process so V8 can monomorphize the call site. Also switched to a fixed-duration measure loop and added a summary row of min/avg/max speedup vs. the baseline.
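For context, a minimal sketch of what such a fixed-duration measure loop can look like; the `measure` helper, the input size, and the duration are illustrative assumptions, not the PR's actual code. Because each implementation is spawned in its own child process, the `fn(input)` call site only ever sees one function and stays monomorphic for V8:

```typescript
import { performance } from "node:perf_hooks";

interface Result {
  ops: number;       // total hash invocations completed
  opsPerSec: number; // throughput over the measured window
}

// Hypothetical measure loop: run `fn` repeatedly for a fixed wall-clock
// duration and report throughput. In the benchmark's setup, this runs in a
// dedicated child process per implementation, so `fn(input)` remains a
// monomorphic call site that V8 can fully optimize.
function measure(
  fn: (data: Uint8Array) => Uint8Array,
  durationMs: number
): Result {
  const input = new Uint8Array(32).fill(1); // fixed-size sample input
  const start = performance.now();
  let ops = 0;
  while (performance.now() - start < durationMs) {
    fn(input);
    ops++;
  }
  const elapsed = performance.now() - start;
  return { ops, opsPerSec: (ops / elapsed) * 1000 };
}
```

A parent script would then spawn one child per implementation (baseline plus the three @awasm/noble variants), collect each child's `opsPerSec`, and compute the min/avg/max speedup row against the baseline.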
Codecov Report ✅ All modified and coverable lines are covered by tests.
📦 Bundle Size Analysis
Values are minified+gzipped bundles of each package entry. Workspace deps are bundled; external deps are excluded. Generated by the bundle-size workflow.
This is really interesting. I think we should not fully switch right away; people are likely not that quick to update their "no WASM in our libraries" policies (this can realistically still take quite some time). But I guess we could do a second "on the libraries" run/test using our custom crypto logic/handling from Common and, if this plays out in a similar way, integrate this structurally and note really prominently in the docs that people should use it if they care about performance. @paulmillr Do you regard these libraries as already mature/safe, or are they still in the "be a bit careful for the first months" period?
@holgerd77 I would say it's production-ready for ordinary apps. It passes all noble tests. It's fine to use it as a secondary optional backend. For wallets specifically, I would wait for a year or so. For something like "building an eth node" it's fine to use it today. A year because that's roughly how long community adoption would take, though it could happen earlier or later. Perhaps someone will also audit it, although LLM-assisted self-audits have been much more productive than third-party ones.
@paulmillr makes sense, thanks!
This PR adds some @awasm/noble keccak benchmarking, as suggested by @paulmillr in #3227.
Here are the benchmark results on my machine (temporarily an old MacBook Pro, so your mileage may vary):