Persona
As a Block Stream Designer
Request
I want to minimize the size of the block stream
Goal
So that the block stream is efficient for both transmission and storage
Technical Notes
Hiero File System (HFS) "files" can be quite large, but we do not need to send their full content in state changes. The only possible content operations are appending to the content, removing it, or replacing it, and the actual content change is always present in the transaction itself. This means state changes can carry only the file metadata, with the content recalculated when updating state.
This makes processing file updates somewhat more complex and requires additional parsing, but it dramatically reduces block stream size when file changes are present. As one example, the monthly update file, which is uploaded as many tens of thousands of individual append transactions, would otherwise expand to over 2 terabytes of block stream data.
We want to document this approach and ensure that all corner cases and possible issues are well understood and handled properly.
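The reconstruction logic described above can be sketched as follows. This is a minimal illustration, not the actual Hiero implementation: the class, function, and operation names are hypothetical, standing in for the real transaction types (e.g. file append, update, and delete) whose bodies carry the content change.

```python
from dataclasses import dataclass

@dataclass
class HfsFile:
    """Illustrative local view of an HFS file: metadata plus tracked content."""
    file_id: str
    contents: bytes = b""

def apply_transaction(file: HfsFile, op: str, data: bytes = b"") -> None:
    """Update locally tracked content from a transaction body.

    Because append, replace, and delete are the only content operations,
    a consumer can derive the full content from transactions alone, and
    state changes need only carry metadata.
    """
    if op == "append":        # add data to the existing content
        file.contents += data
    elif op == "replace":     # replace the content in full
        file.contents = data
    elif op == "delete":      # remove the content entirely
        file.contents = b""
    else:
        raise ValueError(f"unknown operation: {op}")

# Example: a large file uploaded as an initial replace followed by
# many small append transactions, as with the monthly update file.
f = HfsFile("0.0.150")
apply_transaction(f, "replace", b"header;")
for chunk in (b"part1;", b"part2;", b"part3;"):
    apply_transaction(f, "append", chunk)
assert f.contents == b"header;part1;part2;part3;"
```

Note that this shifts work to the consumer: it must parse transaction bodies to maintain content, rather than reading it directly from state changes, which is the processing cost the notes above accept in exchange for the smaller stream.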