
Proof aggregation circuit #523

Closed
zhenfeizhang wants to merge 44 commits from the proof-aggregation-circuit branch

Conversation


@zhenfeizhang commented May 31, 2023

Description

  • refactor and overall design (this PR)
  • public input aggregation circuit (#522)
  • proof compression circuit (this PR)
  • proof aggregation circuit (this PR)
  • e2e integration (this PR)

Ported from https://github.com/scroll-tech/aggregator.

Issue Link

[link issue here]

Type of change

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • This change requires a documentation update

Contents

Rationale

[design decisions and extended information]

How Has This Been Tested?

Blocked by #522

@zhenfeizhang force-pushed the proof-aggregation-circuit branch from a9db3f5 to 19a7110 on June 6, 2023 16:50
@zhenfeizhang marked this pull request as ready for review on June 6, 2023 16:50
@zhenfeizhang mentioned this pull request on Jun 6, 2023
@zhenfeizhang force-pushed the proof-aggregation-circuit branch from 6d6209e to 4d66cd9 on June 6, 2023 17:08
@zhenfeizhang force-pushed the proof-aggregation-circuit branch from 4c11fb7 to c0f5b94 on June 6, 2023 19:18
@lispc commented Jun 7, 2023

I checked the changes in Cargo.lock and noticed that two versions of halo2-lib are used. It works, but it may bring some drawbacks: a bigger binary size, slower build times, and possibly subtle problems in the future. I would suggest using a single branch of halo2-lib, either "minimize-dif" or "halo2-ecc-snark-verifier-0323".

[screenshot: Cargo.lock showing two versions of halo2-lib]
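
A minimal sketch of the suggested fix, assuming the halo2-lib crates are pulled in as git dependencies (the upstream URL, crate names, and chosen branch below are illustrative guesses, not taken from this PR): a [patch] section in the workspace Cargo.toml redirects every transitive use to a single source, after which `cargo tree --duplicates` should no longer report two copies.

# Hypothetical workspace Cargo.toml excerpt: pin all users of the halo2-lib
# crates to one fork and branch so Cargo.lock keeps a single version.
[patch."https://github.com/axiom-crypto/halo2-lib"]
halo2-base = { git = "https://github.com/scroll-tech/halo2-lib", branch = "minimize-dif" }
halo2-ecc = { git = "https://github.com/scroll-tech/halo2-lib", branch = "minimize-dif" }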

@lispc commented Jun 21, 2023

The merge conflicts seem strange... I will resolve them...

/// Instance for public input; stores
/// - accumulator from aggregation (12 elements)
/// - aggregated public inputs (136 elements):
/// chain_id ||

chain_id is now placed last in the instance, in accordance with:

// last 8 inputs are the chain id
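
A minimal sketch of the resulting layout, using only the sizes stated in the doc comment above (12 accumulator cells, 136 aggregated public-input cells, chain_id in the last 8); the constant names are hypothetical, not taken from this PR:

// Hypothetical constants mirroring the documented instance layout.
const ACC_LEN: usize = 12; // accumulator from aggregation
const PI_LEN: usize = 136; // aggregated public inputs
const CHAIN_ID_LEN: usize = 8; // "last 8 inputs are the chain id"

// chain_id now occupies the final 8 instance cells: indices 140..148.
fn chain_id_range() -> std::ops::Range<usize> {
    (ACC_LEN + PI_LEN - CHAIN_ID_LEN)..(ACC_LEN + PI_LEN)
}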


In the compression process, the public inputs are carried through unchanged. In the aggregation process, the public inputs are aggregated by the pi aggregation circuit and the snarks are aggregated using KzgAs; the two parts are then put together in the proof aggregation circuit. It may be a good idea to illustrate this inside the figure?
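
A minimal Rust sketch of that two-stage flow (the types and function names below are hypothetical placeholders standing in for the real snark-verifier/halo2 API, which is not reproduced here):

// Hypothetical placeholder types; the real PR works with snark-verifier types.
struct Snark;
struct AggregatedPi;
struct Accumulator;

fn pi_aggregation_circuit(_snarks: &[Snark]) -> AggregatedPi { AggregatedPi }
fn kzg_as_accumulate(_snarks: &[Snark]) -> Accumulator { Accumulator }
fn proof_aggregation_circuit(_pi: AggregatedPi, _acc: Accumulator) -> Snark { Snark }

// Two-stage flow as described above: public inputs are aggregated by the
// pi aggregation circuit, snarks are accumulated with KzgAs, and the proof
// aggregation circuit combines the two parts.
fn aggregate(chunk_snarks: Vec<Snark>) -> Snark {
    let pi_agg = pi_aggregation_circuit(&chunk_snarks);
    let acc = kzg_as_accumulate(&chunk_snarks);
    proof_aggregation_circuit(pi_agg, acc)
}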

huwenqing0606 and others added 4 commits on June 21, 2023 12:16
* a few comments inside the core function for proof compression: extract_accumulators_and_proof

* comments on the proof compression circuit

* comments on the pi aggregation circuit, especially the calculation of hash indexes when using the Keccak circuit; these are hard-coded here, so they need to be handled very carefully

* comments on the proof aggregation circuit

* clean up all commits from my side

* clean up of all commits on my side
// chunk[k-1].withdraw_root ||
// batch_data_hash )
let preimage = [
chunk_hashes[0].chain_id.to_le_bytes().as_ref(),

Solidity always uses the big-endian encoding.

Suggested change:
- chunk_hashes[0].chain_id.to_le_bytes().as_ref(),
+ chunk_hashes[0].chain_id.to_be_bytes().as_ref(),
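
A quick illustration of the difference (assuming chain_id is a u64, as the to_le_bytes call suggests); Solidity hashes the big-endian form, so the preimage must match it:

fn main() {
    // For chain_id = 1u64: little-endian puts the low byte first,
    // big-endian puts it last. The on-chain keccak sees the latter.
    let chain_id: u64 = 1;
    assert_eq!(chain_id.to_le_bytes(), [1, 0, 0, 0, 0, 0, 0, 0]);
    assert_eq!(chain_id.to_be_bytes(), [0, 0, 0, 0, 0, 0, 0, 1]);
}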

@zhenfeizhang force-pushed the proof-aggregation-circuit branch from da99d43 to 65912f6 on June 22, 2023 13:42
@@ -0,0 +1 @@
{"strategy":"Simple","degree":26,"num_advice":[1],"num_lookup_advice":[1],"num_fixed":1,"lookup_bits":20,"limb_bits":88,"num_limbs":3}

I remember @lispc mentioned that degree 25 would also work?
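
For reference, a minimal sketch of deserializing this parameter file (the struct simply mirrors the JSON keys and assumes the serde/serde_json crates; the actual config type in the codebase may differ):

// Hypothetical mirror of the JSON parameter file above.
use serde::Deserialize;

#[derive(Deserialize, Debug)]
struct CompressionParams {
    strategy: String,              // "Simple"
    degree: u32,                   // 26 here; the question is whether 25 also works
    num_advice: Vec<usize>,
    num_lookup_advice: Vec<usize>,
    num_fixed: usize,
    lookup_bits: usize,
    limb_bits: usize,
    num_limbs: usize,
}

fn load_params(json: &str) -> serde_json::Result<CompressionParams> {
    serde_json::from_str(json)
}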

}
}

pub(crate) fn _compress_wide_param() -> Self {

Are _compress_wide_param and _compress_thin_param just for test cases?
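
If they are test-only, one option (sketched below with a hypothetical type name; whether it fits depends on how the params are actually used) is to gate them behind cfg(test) rather than relying on the leading underscore:

// Hypothetical stand-in for whatever struct `Self` refers to above.
struct ConfigParams;

impl ConfigParams {
    // Compiled only for tests, so the preset cannot be picked up by
    // production code paths by accident.
    #[cfg(test)]
    pub(crate) fn compress_wide_param() -> Self {
        ConfigParams
    }
}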

// data hash + public_input_hash = snark public input
assert_eq!(
Fr::from(chunk.data_hash.as_bytes()[i] as u64),
snark_hash_bytes[i + 20]

Replace 20 with SNARK_DATA_HASH_OFFSET.


assert_eq!(
Fr::from(chunk_hash_bytes[i] as u64),
snark_hash_bytes[i + 52]

Replace 52 with SNARK_PI_HASH_OFFSET.
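
A minimal sketch of the suggested cleanup (the offsets 20 and 52 come from the assertions above; the constant names are the ones proposed in these review comments, everything else is illustrative):

// Offsets into the snark's public-input byte layout, replacing the magic
// numbers 20 and 52 used in the assertions above.
const SNARK_DATA_HASH_OFFSET: usize = 20;
const SNARK_PI_HASH_OFFSET: usize = 52;

// e.g.:
// assert_eq!(
//     Fr::from(chunk.data_hash.as_bytes()[i] as u64),
//     snark_hash_bytes[i + SNARK_DATA_HASH_OFFSET]
// );
// assert_eq!(
//     Fr::from(chunk_hash_bytes[i] as u64),
//     snark_hash_bytes[i + SNARK_PI_HASH_OFFSET]
// );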

* Add a conversion from witness Block to `ChunkHash`.

* Fix lint.
@lispc commented Jul 4, 2023

The compression part has been merged.

@zhenfeizhang mentioned this pull request on Jul 24, 2023
@zhenfeizhang (Author)

Replaced with #670 (Static proof aggregation).

@zhenfeizhang (Author)

Closed, as #670 has been merged.

@zhenfeizhang deleted the proof-aggregation-circuit branch on August 10, 2023 23:13