[Reward + Gov] Explore whether we can spread reward and governance calculations across the epoch, instead of calculating everything during the epoch transition
#464
Open
satran004 opened this issue on Mar 4, 2025 · 0 comments
Currently, both reward calculation and governance state calculation occur during the epoch transition.
For example, to calculate rewards/adapots for epoch e, we start fetching the required data and trigger the calculation during the epoch transition from epoch e-1 to e.
However, since most of the data needed for the adapot calculation of epoch e is already available during epoch e-1, we should explore whether the calculation can be broken into steps. This would minimize the work performed during the epoch transition (from e-1 to e) and allow the rewards and adapot details to be available as early as possible, rather than incurring the current delay of roughly 15-20 minutes.
The same should be evaluated for governance state calculation.
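As a rough illustration of the kind of split described above, here is a minimal sketch of a two-phase pipeline: an incremental phase that runs repeatedly while epoch e-1 is still in progress, and a small finalization phase at the e-1 to e boundary that only applies the inputs not known until then. All class and method names (StakedRewardPipeline, PrecomputedState, accumulate, finalizeAtTransition) and the simplified proportional reward split are hypothetical assumptions for illustration, not yaci-store code or the actual ledger rules.

```java
// Hypothetical sketch only: these types do not exist in the project; they illustrate
// splitting the adapots/reward work into an incremental phase and a small final phase.
import java.util.HashMap;
import java.util.Map;

class StakedRewardPipeline {

    // Data that can already be collected while epoch e-1 is still running.
    static class PrecomputedState {
        Map<String, Long> stakeSnapshot = new HashMap<>();    // latest stake per pool (lovelace)
        Map<String, Integer> blocksMinted = new HashMap<>();  // blocks minted per pool so far
        long accumulatedFees;                                  // fees collected so far
    }

    private final PrecomputedState state = new PrecomputedState();

    // Phase 1: run periodically (e.g. every n blocks) during epoch e-1.
    // Each run only appends incremental data, so it stays cheap.
    void accumulate(String poolId, long currentStake, int newBlocks, long feesInBlock) {
        state.stakeSnapshot.put(poolId, currentStake);             // latest observed stake
        state.blocksMinted.merge(poolId, newBlocks, Integer::sum); // running block count
        state.accumulatedFees += feesInBlock;
    }

    // Phase 2: run at the e-1 -> e transition. Only the boundary-dependent inputs
    // (e.g. the final reserves figure) are applied here, keeping the transition work small.
    Map<String, Long> finalizeAtTransition(long reservesAtBoundary, double monetaryExpansion) {
        long rewardPot = (long) (reservesAtBoundary * monetaryExpansion) + state.accumulatedFees;
        long totalStake = state.stakeSnapshot.values().stream().mapToLong(Long::longValue).sum();

        Map<String, Long> rewardsPerPool = new HashMap<>();
        state.stakeSnapshot.forEach((poolId, stake) -> {
            // Proportional split as a placeholder; the real reward rules are more involved.
            long share = totalStake == 0 ? 0 : rewardPot * stake / totalStake;
            rewardsPerPool.put(poolId, share);
        });
        return rewardsPerPool;
    }
}
```

The point of the split is simply that phase 1 can run many times during epoch e-1, leaving only the boundary-dependent inputs for the transition itself; a similar staging could be evaluated for the governance state calculation.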