Update spec to reflect new assignments (Layr-Labs#134)
mooselumph authored Dec 21, 2023
1 parent 8876da6 commit 1508b42
Showing 2 changed files with 50 additions and 67 deletions.
20 changes: 13 additions & 7 deletions docs/spec/data-model.md
@@ -11,13 +11,15 @@ type SecurityParam struct {
QuorumID QuorumID
// AdversaryThreshold is the maximum amount of stake that can be controlled by an adversary in the quorum as a percentage of the total stake in the quorum
AdversaryThreshold uint8
// QuorumThreshold is the amount of stake that must sign a message for it to be considered valid as a percentage of the total stake in the quorum
QuorumThreshold uint8 `json:"quorum_threshold"`
}

// QuorumParam contains the quorum ID and the quorum threshold for the quorum
// QuorumResult contains the quorum ID and the amount signed for the quorum
type QuorumResult struct {
QuorumID QuorumID
// QuorumThreshold is the amount of stake that must sign a message for it to be considered valid as a percentage of the total stake in the quorum
QuorumThreshold uint8
// PercentSigned is percentage of the total stake for the quorum that signed for a particular batch.
PercentSigned uint8
}
```
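To make the relationship between these two structs concrete, here is a hedged Go sketch of how a batch's signing result might be checked against a blob's security parameters. The `meetsThreshold` helper and the plain `uint8` quorum ID are illustrative assumptions, not part of the spec.

```go
package main

import "fmt"

// Illustrative only: the spec defines QuorumID as its own type; uint8 is a
// simplification here.
type SecurityParam struct {
	QuorumID           uint8
	AdversaryThreshold uint8
	QuorumThreshold    uint8
}

type QuorumResult struct {
	QuorumID      uint8
	PercentSigned uint8
}

// meetsThreshold is a hypothetical helper: a batch satisfies a quorum's
// requirement when the percentage of stake that signed reaches QuorumThreshold.
func meetsThreshold(p SecurityParam, r QuorumResult) bool {
	return p.QuorumID == r.QuorumID && r.PercentSigned >= p.QuorumThreshold
}

func main() {
	p := SecurityParam{QuorumID: 0, AdversaryThreshold: 33, QuorumThreshold: 67}
	fmt.Println(meetsThreshold(p, QuorumResult{QuorumID: 0, PercentSigned: 80})) // true
	fmt.Println(meetsThreshold(p, QuorumResult{QuorumID: 0, PercentSigned: 50})) // false
}
```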

@@ -37,12 +39,16 @@ type BlobRequestHeader struct {

```go
type BlobHeader struct {
BlobRequestHeader
BlobCommitments
// ChunkLength is the length of each chunk in symbols; all chunks in an encoded blob must be the same length
// QuorumInfos contains the quorum specific parameters for the blob
QuorumInfos []*BlobQuorumInfo
}

// BlobQuorumInfo contains the quorum IDs and parameters for a blob specific to a given quorum
type BlobQuorumInfo struct {
SecurityParam
// ChunkLength is the number of symbols in a chunk
ChunkLength uint
// QuantizationFactor determines the nominal number of chunks; NominalNumChunks = QuantizationFactor * NumOperatorsForQuorum
QuantizationFactor uint
}

// BlobCommitments contains the blob's commitment, degree proof, and the actual degree.
97 changes: 37 additions & 60 deletions docs/spec/protocol-modules/storage/assignment.md
@@ -1,14 +1,11 @@

# Assignment

The assignment functionality within EigenDA is carried out by the `AssignmentCoordinator` which is responsible for taking the current OperatorState and the security requirements represented by a given QuorumParams and determining or validating system parameters that will satisfy these security requirements given the OperatorStates. There are two classes of parameters that must be determined or validated:
The assignment functionality within EigenDA is carried out by the `AssignmentCoordinator`, which is responsible for taking the current OperatorState and the security requirements represented by a given QuorumParam and determining or validating system parameters that will satisfy these security requirements given the OperatorStates. There are two classes of parameters that must be determined or validated:

1) the chunk indices that will be assigned to each DA node.
2) the length of each chunk (measured in number of symbols). In keeping with the constraint imposed by the Encoding module, all chunks must have the same length, so this parameter is a scalar.

As illustrated in the interface that follows, the assignment of indices does not depend on the security parameters such as quorum threshold and adversary threshold. As these parameters change, the only effect on the resulting assignments will be that the chunk length changes.

The `AssignmentCoordinator` is used by the disperser to determine or validate the `EncodingParams` struct used to encode a data blob, consisting of the total number of chunks (i.e., the total number of indices) and the length of the chunk. We illustrate this in the next section.

## Interface

@@ -17,86 +14,72 @@ The AssignmentCoordinator must implement the following interface, which facilita
```go
type AssignmentCoordinator interface {

// GetAssignments calculates the full set of node assignments. The assignment of indices to nodes depends only on the OperatorState
// for a given quorum and the quantizationFactor. In particular, it does not depend on the security parameters.
GetAssignments(state *OperatorState, quorumID QuorumID, quantizationFactor uint) (map[OperatorID]Assignment, AssignmentInfo, error)
// GetAssignments calculates the full set of node assignments.
GetAssignments(state *OperatorState, blobLength uint, info *BlobQuorumInfo) (map[OperatorID]Assignment, AssignmentInfo, error)

// GetOperatorAssignment calculates the assignment for a specific DA node
GetOperatorAssignment(state *OperatorState, quorum QuorumID, quantizationFactor uint, id OperatorID) (Assignment, AssignmentInfo, error)
GetOperatorAssignment(state *OperatorState, header *BlobHeader, quorum QuorumID, id OperatorID) (Assignment, AssignmentInfo, error)

// GetMinimumChunkLength calculates the minimum chunkLength that is sufficient for a given blob for each quorum
GetMinimumChunkLength(numOperators, blobLength, quantizationFactor uint, quorumThreshold, adversaryThreshold uint8) uint
// ValidateChunkLength validates that the chunk length for the given quorum satisfies all protocol requirements
ValidateChunkLength(state *OperatorState, header *BlobHeader, quorum QuorumID) (bool, error)

// GetChunkLengthFromHeader calculates the chunk length from the blob header
GetChunkLengthFromHeader(state *OperatorState, header *BlobQuorumInfo) (uint, error)
// CalculateChunkLength calculates the chunk length for the given quorum that satisfies all protocol requirements
CalculateChunkLength(state *OperatorState, blobLength uint, param *SecurityParam) (uint, error)
}
```

The `AssignmentCoordinator` can be used to get the `EncodingParams` struct in the following manner:
## Standard Assignment Security Logic

```go
// quorumThreshold, adversaryThreshold, blobSize, quorumID and quantizationFactor are given
The standard assignment coordinator implements a very simple logic for determining the number of chunks per node and the chunk length, which we describe here. More background concerning this design can be found in the [Design Document](../../../design/assignment.md).

// Get assignments
assignments, info, _ := asn.GetAssignments(state, quorumID, quantizationFactor)

// Get minimum chunk length
blobLength := enc.GetBlobLength(blobSize)
numOperators := uint(len(state.Operators[quorumID]))
chunkLength := asn.GetMinimumChunkLength(numOperators, blobLength, quantizationFactor, quorumThreshold, adversaryThreshold)
**Chunk Length**.

// Get encoding params
params, _ := enc.GetEncodingParams(chunkLength, info.TotalChunks)
```
The protocol requires that chunk lengths are sufficiently small that operators with a small proportion of stake are able to receive a quantity of data commensurate with their stake share. For each operator $i$, let $S_i$ signify the amount of stake held by that operator.

## Standard Assignment Security Logic
We require that the chunk length $C$ satisfy

The standard assignment coordinator implements a very simple logic for determining the number of chunks per node and the chunk length, which we describe here. More background concerning this design can be found in the [Design Document](../../../design/assignment.md).
$$
C \le \text{NextPowerOf2}\left(\frac{B}{\gamma}\max\left(\frac{\min_jS_j}{\sum_jS_j}, \frac{1}{M_\text{max}} \right) \right)
$$

**Index Assignment**.
For each operator $i$, let $S_i$ signify the amount of stake held by that operator. The number of chunks assigned to an operator $i$ is given by the equation

$$m_i = \text{ceil}\left(\frac{\rho nS_i}{\sum_j S_j}\right) \tag{1}$$
where $\gamma = \beta-\alpha$, with $\alpha$ and $\beta$ as defined in the [Storage Overview](./overview.md).

where $n$ is the total number of operators and $\rho$ is called the quantization factor.
This means that as long as an operator has a stake share of at least $1/M_\text{max}$, the encoded data that it receives will be within a factor of 2 of its share of stake. Operators with less than $1/M_\text{max}$ of the stake will receive no more than a $1/M_\text{max}$ fraction of the encoded data. $M_\text{max}$ represents the maximum number of chunks that the disperser can be required to encode per blob; this limit is included because proving costs scale somewhat super-linearly with the number of chunks.
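As a rough illustration of this bound, the following Go sketch computes the maximum permissible chunk length. The function names (`nextPowerOf2`, `maxChunkLength`) and the numeric example are assumptions for illustration, not the canonical implementation.

```go
package main

import "fmt"

// nextPowerOf2 returns the smallest power of two greater than or equal to x.
func nextPowerOf2(x float64) uint {
	p := uint(1)
	for float64(p) < x {
		p *= 2
	}
	return p
}

// maxChunkLength computes the upper bound on the chunk length C for a blob of
// blobLength symbols: NextPowerOf2((B/gamma) * max(min_j S_j / sum_j S_j, 1/mMax)).
func maxChunkLength(blobLength uint, stakes []uint, gamma float64, mMax uint) uint {
	var total, min uint
	for i, s := range stakes {
		total += s
		if i == 0 || s < min {
			min = s
		}
	}
	share := float64(min) / float64(total)
	if 1/float64(mMax) > share {
		share = 1 / float64(mMax)
	}
	return nextPowerOf2(float64(blobLength) / gamma * share)
}

func main() {
	// Made-up example: B = 100 symbols, gamma = 0.5, M_max = 10; the smallest
	// operator holds 10% of stake, so C <= NextPowerOf2(100/0.5*0.1) = 32.
	fmt.Println(maxChunkLength(100, []uint{10, 20, 30, 40}, 0.5, 10)) // 32
}
```

Note how the $1/M_\text{max}$ floor kicks in for very small operators: with stakes of 1 and 99 the minimum share is 1%, but the bound is computed from $1/M_\text{max}$ instead.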

**Chunk Length**.
To determine the chunk length, we first set the reconstruction threshold at the level of chunks (i.e., the number of chunks from which we wish to be able to reconstruct the original blob) as
In the future, additional constraints on the chunk length may be added; for instance, the chunk length may be set so as to maintain a fixed number of chunks per blob across all system states. Currently, the protocol does not mandate a specific chunk length, but accepts any value satisfying the above constraint. The `CalculateChunkLength` function is provided as a convenience that can be used to find a chunk length satisfying the protocol requirements.

$$m = \text{ceil}(n\rho\gamma)$$

where $\gamma = \beta-\alpha$, with $\alpha$ and $\beta$ as defined in the [Storage Overview](./overview.md).

We can then derive the chunk length as $C \ge B/m$, where $B$ is the length of the blob.
**Index Assignment**.

**Correctness**.
Let's show that any set of operators $U_q \setminus U_a$ will have a complete blob. The amount of data held by these operators is given by
As above, let $S_i$ signify the amount of stake held by operator $i$. We want the number of chunks assigned to operator $i$ to satisfy

$$
\sum_{i \in U_q \setminus U_a} m_i C
\frac{\gamma m_i C}{B} \ge \frac{S_i}{\sum_j S_j}
$$

We first notice from (1) and from the definitions of $U_q$ and $U_a$ that
Let

$$
\sum_{i \in U_q \setminus U_a} m_i \ge n\rho\frac{\sum_{i \in U_q \setminus U_a} S_i}{\sum_jS_j} = n\rho\frac{\sum_{i \in U_q} S_i - \sum_{i \in U_a} S_i}{\sum_jS_j} \ge n\rho(\beta - \alpha) = n\rho\gamma = ceil(n\rho\gamma) = m.
m_i = \text{ceil}\left(\frac{B S_i}{C\gamma \sum_j S_j}\right)\tag{1}
$$
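Equation (1) can be sketched in Go as follows; the function and parameter names are hypothetical, and the numbers in the example are made up.

```go
package main

import (
	"fmt"
	"math"
)

// numChunks computes equation (1): m_i = ceil(B*S_i / (C*gamma*sum_j S_j)),
// the number of chunks assigned to an operator holding stake out of totalStake.
func numChunks(blobLength, chunkLength uint, gamma float64, stake, totalStake uint) uint {
	return uint(math.Ceil(float64(blobLength) * float64(stake) /
		(float64(chunkLength) * gamma * float64(totalStake))))
}

func main() {
	// Made-up example: B = 100, C = 20, gamma = 0.5; an operator with 30% of
	// stake is assigned ceil(100*30 / (20*0.5*100)) = 3 chunks.
	fmt.Println(numChunks(100, 20, 0.5, 30, 100)) // 3
}
```

The ceiling means an operator's assignment is rounded up, so the total encoded data can slightly exceed the minimum required by the threshold parameters.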

The second-to-last equality follows because each $m_i$ is an integer, so the summation must be an integer. Substituting for $C$, we see that
**Correctness**.
Let's show that for any sets $U_q$ and $U_a$ satisfying the constraints in the [Acceptance Guarantee](./overview.md#acceptance-guarantee), the data held by the operators in $U_q \setminus U_a$ will constitute an entire blob. The amount of data held by these operators is given by

$$
\sum_{i \in U_q \setminus U_a} m_i C \ge mC \ge B. \tag{2}
\sum_{i \in U_q \setminus U_a} m_i C
$$

Thus, the reconstruction requirement from the [Encoding](./encoding.md) module is satisfied. Notice that the final inequality of this equation can be written as
We have from (1) and from the definitions of $U_q$ and $U_a$ that

`ceil(EncodedBlobLength*(QuorumThreshold-AdversaryThreshold)) >= BlobLength`
$$
\sum_{i \in U_q \setminus U_a} m_i C \ge \frac{B}{\gamma}\sum_{i \in U_q \setminus U_a}\frac{S_i}{\sum_j S_j} = \frac{B}{\gamma}\frac{\sum_{i \in U_q} S_i - \sum_{i \in U_a} S_i}{\sum_jS_j} \ge \frac{B}{\gamma}(\beta - \alpha) = B \tag{2}
$$

with the following mappings:
- `EncodedBlobLength` = $n\rho C$
- `QuorumThreshold` = $\beta$
- `AdversaryThreshold` = $\alpha$
- `BlobLength` = $B$.
Thus, the reconstruction requirement from the [Encoding](./encoding.md) module is satisfied.
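As a numeric illustration with made-up values: take $B = 100$, $C = 20$, $\gamma = 0.5$, and four operators with stakes $10, 20, 30, 40$ (total $100$). Equation (1) gives $m_i = \text{ceil}(S_i/10)$, i.e. $1, 2, 3, 4$ chunks respectively. If $U_q \setminus U_a$ consists of the operators with stakes $20$ and $40$ (a $60\%$ stake share, exceeding $\beta - \alpha = 0.5$), then

$$
\sum_{i \in U_q \setminus U_a} m_i C = (2 + 4)\cdot 20 = 120 \ge 100 = B,
$$

so these operators alone hold enough encoded data to reconstruct the blob.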

## Validation Actions

@@ -105,19 +88,13 @@ Validation with respect to assignments is performed at different layers of the p
### DA Nodes

When the DA node receives a `StoreChunks` request, it performs the following validation actions relative to each blob header:
- It uses the `ValidateChunkLength` to validate that the `ChunkLength` for the blob satisfies the above constraints.
- It uses `GetOperatorAssignment` to calculate the chunk indices for which it is responsible, and verifies that each of the chunks that it has received lies on the polynomial at these indices (see [Encoding validation actions](./encoding.md#validation-actions))
- It validates that the `Length` contained in the `BlobHeader` is valid. (see [Encoding validation actions](./encoding.md#validation-actions))
- It determines the `ChunkLength` of the received chunks.
- It validates that the `EncodedBlobLength` of the `BlobHeader` satisfies `BlobHeader.EncodedBlobLength = ChunkLength*BlobHeader.QuantizationFactor*State.NumOperators`

This step ensures that each honest node has received the blobs for which it is accountable under the [Standard Assignment Coordinator](#standard-assignment-security-logic), and that the chunk Length and quantization parameters are consistent across all of the honest DA nodes.

### Rollup Smart Contract
This step ensures that each honest node has received the blobs for which it is accountable under the [Standard Assignment Coordinator](#standard-assignment-security-logic).

When the rollup confirms its blob against the EigenDA batch, it performs the following checks for each quorum
Since the DA nodes will accept a range of `ChunkLength` values, so long as they satisfy the constraints of the protocol, there must be consensus on the `ChunkLength` in use for a particular blob and quorum. For this reason, the `ChunkLength` is included in the `BlobQuorumParam`, which is hashed to create the merkle root contained in the `BatchHeaderHash` signed by the DA nodes.

- Check that `BlobHeader.EncodedBlobLength*(BatchHeader.QuorumThreshold[quorumId] - BlobHeader.AdversaryThreshold) > BlobHeader.Length`

Together, these checks ensure that Equation (2) is satisfied.
### Rollup Smart Contract

The check by the rollup smart contract also serves to ensure that the `QuorumThreshold` for the blob is greater than the `AdversaryThreshold`. This means that if the `EncodedBlobLength` was set incorrectly by the disperser and the adversarial contingent of the DA nodes is within the specified threshold, the batch cannot be confirmed as a sufficient number of nodes will not sign.
When the rollup confirms its blob against the EigenDA batch, it checks that the `QuorumThreshold` for the blob is greater than its `AdversaryThreshold`. This means that if the `ChunkLength` determined by the disperser is invalid, the batch cannot be confirmed, as a sufficient number of nodes will not sign.
