Welcome
Welcome to the Rollkit Specifications.
Protocol/Component Name
Abstract
Provide a concise description of the purpose of the component for which the specification is written, along with its contribution to Rollkit or other relevant parts of the system. Make sure to include proper references to the relevant sections.
Protocol/Component Description
Offer a comprehensive explanation of the protocol, covering aspects such as data flow, communication mechanisms, and any other details necessary for understanding the inner workings of this component.
Message Structure/Communication Format
If this particular component is expected to communicate over the network, outline the structure of the message protocol, including details such as field interpretation, message format, and any other relevant information.
Assumptions and Considerations
If there are any assumptions required for the component's correct operation, performance, security, or other expected features, outline them here. Additionally, provide any relevant considerations related to security or other concerns.
Implementation
Include a link to the location where the implementation of this protocol can be found. Note that specific implementation details should be documented in the rollkit repository rather than in the specification document.
References
List any references used or cited in the document.
General Tips
How to use a Mermaid diagram that can be displayed in Markdown:

```mermaid
sequenceDiagram
    title Example
    participant A
    participant B
    A->>B: Example
    B->>A: Example
```

```mermaid
graph LR
    A[Example] --> B[Example]
    B --> C[Example]
    C --> A
```

```mermaid
gantt
    title Example
    dateFormat YYYY-MM-DD
    section Example
    A :done, des1, 2014-01-06,2014-01-08
    B :done, des2, 2014-01-06,2014-01-08
    C :done, des3, 2014-01-06,2014-01-08
```
Grammar and spelling check
We recommend using your favorite spellchecker extension in your IDE, such as Grammarly, to make sure that the document is free of spelling and grammar errors.
Use of links
If you want to use links, use proper syntax. This goes for both internal links, such as documentation, and external links.
At the bottom of the document, in the References section, you can add footnotes that will be visible in the markdown document:
[1] Grammarly
[2] Documentation
[3] external links
Then, at the bottom, add the actual link definitions, which will not be visible in the rendered markdown document.
Use of tables
If you are describing variables, components, or other things in a structured list that can be described in a table, use the following syntax:
Name | Type | Description |
---|---|---|
name | type | Description |
Rollkit Dependency Graph
We use the following color coding in this Graph:
- No Colour: Work not yet started
- Yellow Box: Work in progress
- Green Box: Work completed or at least unblocking the next dependency
- Red Border: Work needs to happen in cooperation with another team
If no EPICs are linked to a box yet, that box currently has no priority, is still in the ideation phase, or its dependency is unclear.
Block Manager
Abstract
The block manager is a key component of full nodes and is responsible for block production or block syncing depending on the node type: sequencer or non-sequencer. Block syncing in this context includes retrieving the published blocks from the network (P2P network or DA network), validating them to raise fraud proofs upon validation failure, updating the state, and storing the validated blocks. A full node invokes multiple block manager functionalities in parallel, such as:
- Block Production (only for sequencer full nodes)
- Block Publication to DA network
- Block Retrieval from DA network
- Block Sync Service
- Block Publication to P2P network
- Block Retrieval from P2P network
- State Update after Block Retrieval
```mermaid
sequenceDiagram
    title Overview of Block Manager
    participant User
    participant Sequencer
    participant Full Node 1
    participant Full Node 2
    participant DA Layer
    User->>Sequencer: Send Tx
    Sequencer->>Sequencer: Generate Block
    Sequencer->>DA Layer: Publish Block
    Sequencer->>Full Node 1: Gossip Block
    Sequencer->>Full Node 2: Gossip Block
    Full Node 1->>Full Node 1: Verify Block
    Full Node 1->>Full Node 2: Gossip Block
    Full Node 1->>Full Node 1: Mark Block Soft Confirmed
    Full Node 2->>Full Node 2: Verify Block
    Full Node 2->>Full Node 2: Mark Block Soft Confirmed
    DA Layer->>Full Node 1: Retrieve Block
    Full Node 1->>Full Node 1: Mark Block DA Included
    DA Layer->>Full Node 2: Retrieve Block
    Full Node 2->>Full Node 2: Mark Block DA Included
```
Protocol/Component Description
The block manager is initialized using several parameters as defined below:
Name | Type | Description |
---|---|---|
signing key | crypto.PrivKey | used for signing a block after it is created |
config | config.BlockManagerConfig | block manager configurations (see config options below) |
genesis | *cmtypes.GenesisDoc | initialize the block manager with genesis state (genesis configuration defined in config/genesis.json file under the app directory) |
store | store.Store | local datastore for storing rollup blocks and states (default local store path is $db_dir/rollkit and db_dir specified in the config.toml file under the app directory) |
mempool, proxyapp, eventbus | mempool.Mempool, proxy.AppConnConsensus, *cmtypes.EventBus | for initializing the executor (state transition function). mempool is also used in the manager to check for availability of transactions for lazy block production |
dalc | da.DAClient | the data availability light client used to submit and retrieve blocks to DA network |
blockstore | *goheaderstore.Store[*types.Block] | to retrieve blocks gossiped over the P2P network |
Block manager configuration options:
Name | Type | Description |
---|---|---|
BlockTime | time.Duration | time interval used for block production and block retrieval from block store (`defaultBlockTime`) |
DABlockTime | time.Duration | time interval used for both block publication to DA network and block retrieval from DA network (`defaultDABlockTime`) |
DAStartHeight | uint64 | block retrieval from DA network starts from this height |
LazyBlockTime | time.Duration | time interval used for block production in lazy aggregator mode even when there are no transactions (`defaultLazyBlockTime`) |
Block Production
When the full node is operating as a sequencer (aka aggregator), the block manager runs the block production logic. There are two modes of block production, which can be specified in the block manager configuration: `normal` and `lazy`.

In `normal` mode, the block manager runs a timer, which is set to the `BlockTime` configuration parameter, and continuously produces blocks at `BlockTime` intervals.

In `lazy` mode, the block manager starts building a block when any transaction becomes available in the mempool. After the first notification of transaction availability, the manager waits for a 1 second timer to finish in order to collect as many transactions from the mempool as possible. The 1 second delay is chosen to match the default block time of 1s. The block manager also notifies the full node after building each lazy block.
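For illustration, the two modes can be sketched as Go loops. This is a hedged sketch, not Rollkit's actual code: `produceBlock` and the `txsAvailable` channel are hypothetical stand-ins for the manager's internals.

```go
import (
	"context"
	"time"
)

// produceBlock is a hypothetical stand-in for the manager's block building.
func produceBlock() {}

// normalLoop produces a block every blockTime.
func normalLoop(ctx context.Context, blockTime time.Duration) {
	timer := time.NewTimer(blockTime)
	defer timer.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-timer.C:
			produceBlock()
			timer.Reset(blockTime)
		}
	}
}

// lazyLoop waits for a mempool notification, then waits one more second to
// batch additional transactions before building the block.
func lazyLoop(ctx context.Context, txsAvailable <-chan struct{}) {
	for {
		select {
		case <-ctx.Done():
			return
		case <-txsAvailable:
			time.Sleep(1 * time.Second) // matches the 1s default block time
			produceBlock()
		}
	}
}
```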
Building the Block
The block manager of the sequencer nodes performs the following steps to produce a block:
- Call `CreateBlock` using the executor
- Sign the block using the `signing key` to generate the commitment
- Call `ApplyBlock` using the executor to generate an updated state
- Save the block, validators, and updated state to the local store
- Add the newly generated block to the `pendingBlocks` queue
- Publish the newly generated block to channels to notify other components of the sequencer node (such as block and header gossip)
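The sequence above can be condensed into a sketch like the following; the `Manager` and `Executor` types here are simplified, hypothetical stand-ins, not Rollkit's actual signatures.

```go
// Simplified, hypothetical types standing in for Rollkit's internals.
type Block struct{ Height uint64 }
type State struct{}

type Executor interface {
	CreateBlock(height uint64, state State) *Block
	ApplyBlock(state State, b *Block) (State, error)
}

type Manager struct {
	executor      Executor
	lastState     State
	height        uint64
	pendingBlocks []*Block
	blockCh       chan *Block
	sign          func(*Block) ([]byte, error)
	save          func(*Block, []byte, State) error
}

func (m *Manager) produceBlock() error {
	b := m.executor.CreateBlock(m.height+1, m.lastState) // 1. build from mempool txs
	commit, err := m.sign(b)                             // 2. sign to generate the commitment
	if err != nil {
		return err
	}
	newState, err := m.executor.ApplyBlock(m.lastState, b) // 3. state transition
	if err != nil {
		return err
	}
	if err := m.save(b, commit, newState); err != nil { // 4. persist block, validators, state
		return err
	}
	m.lastState, m.height = newState, b.Height
	m.pendingBlocks = append(m.pendingBlocks, b) // 5. queue for DA submission
	m.blockCh <- b                               // 6. notify gossip components
	return nil
}
```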
Block Publication to DA Network
The block manager of the sequencer full nodes regularly publishes the produced blocks (that are pending in the `pendingBlocks` queue) to the DA network, using the `DABlockTime` configuration parameter defined in the block manager config. In the event of failure to publish a block to the DA network, the manager performs up to `maxSubmitAttempts` attempts with an exponential backoff interval between attempts: the backoff starts at `initialBackoff`, doubles on each subsequent attempt, and is capped at `DABlockTime`. A successful publish event empties the `pendingBlocks` queue, while a failure event leads to proper error reporting without emptying the queue.
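The retry policy can be sketched as follows; the function and its parameters mirror the names above but are a hypothetical simplification, not the block manager's actual code.

```go
import (
	"fmt"
	"time"
)

// submitToDAWithRetry retries submission with exponential backoff, starting at
// initialBackoff, doubling after each failed attempt, and capped at daBlockTime.
func submitToDAWithRetry(submit func() error, maxSubmitAttempts int,
	initialBackoff, daBlockTime time.Duration) error {
	backoff := initialBackoff
	for attempt := 1; attempt <= maxSubmitAttempts; attempt++ {
		if err := submit(); err == nil {
			return nil // success: the caller now empties the pendingBlocks queue
		}
		time.Sleep(backoff)
		backoff *= 2
		if backoff > daBlockTime {
			backoff = daBlockTime
		}
	}
	// failure: pendingBlocks is left untouched for the next round
	return fmt.Errorf("DA submission failed after %d attempts", maxSubmitAttempts)
}
```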
Block Retrieval from DA Network
The block manager of the full nodes regularly pulls blocks from the DA network at `DABlockTime` intervals, starting from a DA height read from the last state stored in the local store or from the `DAStartHeight` configuration parameter, whichever is later. The block manager maintains and increments the `daHeight` counter after every DA pull. The pull is performed by making a `RetrieveBlocks(daHeight)` request using the Data Availability Light Client (DALC) retriever, which can return `Success`, `NotFound`, or `Error`. On error, retry logic kicks in with a delay of 100 milliseconds between retries; after 10 retries, an error is logged and the `daHeight` counter is not incremented, which intentionally stalls the block retrieval logic. The `NotFound` case is not an error, as it is acceptable to have no rollup block at a given DA height; retrieval increments the `daHeight` counter in this case. Finally, in the `Success` case, blocks that are successfully retrieved are marked as DA included and sent to be applied (state update). A successful state update triggers fresh DA and block store pulls without waiting for the `DABlockTime` and `BlockTime` intervals.
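The per-height retrieval logic can be sketched like this, with a hypothetical `Retriever` interface and status codes standing in for the DALC types.

```go
import "time"

type Block struct{ Height uint64 }

type FetchStatus int

const (
	StatusSuccess FetchStatus = iota
	StatusNotFound
	StatusError
)

type Retriever interface {
	RetrieveBlocks(daHeight uint64) (FetchStatus, []*Block)
}

// retrieveFromDA tries one DA height with up to 10 retries, 100ms apart,
// and reports whether the daHeight counter may be incremented.
func retrieveFromDA(r Retriever, daHeight uint64, apply func(*Block)) bool {
	for retry := 0; retry < 10; retry++ {
		status, blocks := r.RetrieveBlocks(daHeight)
		switch status {
		case StatusSuccess:
			for _, b := range blocks {
				apply(b) // mark DA included, then trigger the state update
			}
			return true
		case StatusNotFound:
			return true // no rollup block at this DA height is acceptable
		case StatusError:
			time.Sleep(100 * time.Millisecond) // retry after the fixed delay
		}
	}
	return false // log the error; daHeight intentionally stalls
}
```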
Out-of-Order Rollup Blocks on DA
Rollkit should support blocks arriving out-of-order on DA, like so:
Termination Condition
If the sequencer double-signs two blocks at the same height, evidence of the fault should be posted to DA. Rollkit full nodes should process the longest valid chain up to the height of the fault evidence, and terminate. See diagram:
Block Sync Service
The block sync service is created during full node initialization. During the block manager's initialization, a pointer to the block store inside the block sync service is passed to the block manager. Blocks created in the block manager are then passed to the `BlockCh` channel and sent to the go-header service to be gossiped over the P2P network.
Block Publication to P2P network
Blocks created by the sequencer that are ready to be published to the P2P network are sent to the `BlockCh` channel in the Block Manager inside `publishLoop`.

The `blockPublishLoop` in the full node continuously listens for new blocks from the `BlockCh` channel, and when a new block is received, it is written to the block store and broadcast to the network using the block sync service.
Among non-sequencer full nodes, all the block gossiping is handled by the block sync service, and they do not need to publish blocks to the P2P network using any of the block manager components.
Block Retrieval from P2P network
For non-sequencer full nodes, blocks gossiped through the P2P network are retrieved from the block store in the `BlockStoreRetrieveLoop` in the Block Manager.
Starting from a block store height of zero, for every `blockTime` unit of time, a signal is sent to the `blockStoreCh` channel in the block manager; when this signal is received, the `BlockStoreRetrieveLoop` retrieves blocks from the block store.
It keeps track of the last retrieved block's height, and every time the current block store height is greater than the last retrieved height, it retrieves all blocks from the block store between those two heights.
For each retrieved block, it sends a new block event to the `blockInCh` channel, which is the same channel into which blocks retrieved from the DA layer are sent.
Such a block is marked as soft confirmed by the validating full node until the same block is seen on the DA layer, at which point it is marked DA included.
Although a sequencer does not need to retrieve blocks from the P2P network, it still runs the `BlockStoreRetrieveLoop`.
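A sketch of one retrieval step, assuming a minimal, hypothetical block-store interface:

```go
type Block struct{ Height uint64 }

type BlockStore interface {
	Height() uint64
	GetBlock(height uint64) (*Block, error)
}

// retrieveFromBlockStore forwards every block between the last retrieved
// height and the current store height to blockInCh, returning the new
// last retrieved height.
func retrieveFromBlockStore(bs BlockStore, lastHeight uint64, blockInCh chan<- *Block) uint64 {
	current := bs.Height()
	for h := lastHeight + 1; h <= current; h++ {
		if b, err := bs.GetBlock(h); err == nil {
			blockInCh <- b // same channel used for DA-retrieved blocks
		}
	}
	return current
}
```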
About Soft Confirmations and DA Inclusions
The block manager retrieves blocks from both the P2P network and the underlying DA network because blocks are available on the P2P network faster, while DA retrieval is slower (e.g., 1 second vs. 15 seconds). Blocks retrieved from the P2P network are only marked as soft confirmed until DA retrieval succeeds for those blocks and they are marked DA included. DA-included blocks can be considered to have a higher level of finality.
State Update after Block Retrieval
The block manager stores and applies the block to update its state every time a new block is retrieved either via the P2P or DA network. State update involves:
- `ApplyBlock` using the executor: validates the block, executes the block (applies the transactions), captures the validator updates, and creates an updated state.
- `Commit` using the executor: commits the execution and changes, updates the mempool, and publishes the events.
- Store the block, the validators, and the updated state.
Message Structure/Communication Format
The communication between the block manager and executor:
- `InitChain`: using the genesis, a set of parameters, and the validator set, invoke `InitChainSync` on the proxyApp to obtain the initial `appHash` and initialize the state.
- `Commit`: commit the execution and changes, update the mempool, and publish the events.
- `CreateBlock`: prepare a block by polling transactions from the mempool.
- `ApplyBlock`: validate the block, execute the block (apply the transactions), capture the validator updates, and create and return the updated state.
The communication between the full node and block manager:
- Notify when the block is published
- Notify when the block is done lazy building
Assumptions and Considerations
- When the node (re)starts, the block manager loads the initial state from the local store, falling back to the genesis state if none is found.
- The default mode for sequencer nodes is normal (not lazy).
- The sequencer can produce empty blocks.
- The block manager uses persistent storage (disk) when the `root_dir` and `db_path` configuration parameters are specified in the `config.toml` file under the app directory. If these configuration parameters are not specified, in-memory storage is used, which will not be persistent if the node stops.
- The block manager does not re-apply the block (in other words, create a new updated state and persist it) when a block was initially applied using P2P block sync but was later DA included during DA retrieval. The block is only marked DA included in this case.
- The block sync store is created by prefixing `blockSync` on the main data store.
- The genesis `ChainID` is used to create the `PubSubTopID` in go-header with the string `-block` appended to it. This append is because the full node also runs a P2P header sync with a different P2P network. Refer to the go-header specs for more details.
- Block sync over the P2P network works only when a full node is connected to the P2P network by specifying the initial seeds to connect to via the `P2PConfig.Seeds` configuration parameter when starting the full node.
- The node's context is passed down to all the components of the P2P block sync to control shutting down the service either abruptly (in case of failure) or gracefully (during successful scenarios).
Implementation
See block-manager
See tutorial for running a multi-node network with both sequencer and non-sequencer full nodes.
References
[1] Go Header
[2] Block Sync
[3] Full Node
[4] Block Manager
[5] Tutorial
Block Executor
Abstract
The `BlockExecutor` is a component responsible for creating, applying, and maintaining blocks and state in the system. It interacts with the mempool and the application via the ABCI interface.
Detailed Description
The `BlockExecutor` is initialized with a proposer address, `namespace ID`, `chain ID`, `mempool`, `proxyApp`, `eventBus`, and `logger`. It uses these to manage the creation and application of blocks. It also validates blocks and commits them, updating the state as necessary.
- `NewBlockExecutor`: This method creates a new instance of `BlockExecutor`. It takes a proposer address, `namespace ID`, `chain ID`, `mempool`, `proxyApp`, `eventBus`, and `logger` as parameters. See block manager for details.
- `InitChain`: This method initializes the chain by calling ABCI `InitChainSync` using the consensus connection to the app. It takes a `GenesisDoc` as a parameter and sends an ABCI `RequestInitChain` message with the genesis parameters, including:
  - Genesis Time
  - Chain ID
  - Consensus Parameters, including:
    - Block Max Bytes
    - Block Max Gas
    - Evidence Parameters
    - Validator Parameters
    - Version Parameters
  - Initial Validator Set using genesis validators
  - Initial Height
- `CreateBlock`: This method reaps transactions from the mempool and builds a block. It takes the state, the height of the block, the last header hash, and the signature as parameters.
- `ApplyBlock`: This method applies the block to the state. Given the current state and the block to be applied, it:
  - Validates the block, as described in `Validate`.
  - Executes the block using the app, as described in `execute`.
  - Captures the validator updates performed during block execution.
  - Updates the state using the block, the block execution responses, and the validator updates, as described in `updateState`.
  - Returns the updated state, the validator updates, and errors, if any, after applying the block.
  - It can return the following named errors:
    - `ErrEmptyValSetGenerate`: returned when applying the validator changes would result in an empty set.
    - `ErrAddingValidatorToBased`: returned when adding validators to an empty validator set.
- `Validate`: This method validates the block. It takes the state and the block as parameters. In addition to the basic block validation rules, it applies the following validations (a sketch follows this list):
  - New block version must match the block version of the state.
  - If the state is at genesis, the new block height must match the initial height of the state.
  - New block height must be the last block height + 1 of the state.
  - New block header `AppHash` must match the state `AppHash`.
  - New block header `LastResultsHash` must match the state `LastResultsHash`.
  - New block header `AggregatorsHash` must match the state `Validators.Hash()`.
- `Commit`: This method commits the block and updates the mempool. Given the updated state, the block, and the ABCI `ResponseFinalizeBlock` as parameters, it:
  - Invokes the app commit, finalizing the last execution, by calling ABCI `Commit`.
  - Updates the mempool to inform it that the transactions included in the block can be safely discarded.
  - Publishes the events produced during the block execution for indexing.
- `updateState`: This method updates the state. Given the current state, the block, the ABCI `ResponseFinalizeBlock`, and the validator updates, it validates the updated validator set, updates the state by applying the block, and returns the updated state and errors, if any. The state consists of:
  - Version
  - Chain ID
  - Initial Height
  - Last Block, including:
    - Block Height
    - Block Time
    - Block ID
  - Next Validator Set
  - Current Validator Set
  - Last Validators
  - Whether Last Height Validators changed
  - Consensus Parameters
  - Whether Last Height Consensus Parameters changed
  - App Hash
- `execute`: This method executes the block. It takes the context, the state, and the block as parameters. It calls the ABCI method `FinalizeBlock` with the ABCI `RequestFinalizeBlock` containing the block hash, ABCI header, commit, and transactions, and returns the ABCI `ResponseFinalizeBlock` and errors, if any.
- `publishEvents`: This method publishes events related to the block. It takes the ABCI `ResponseFinalizeBlock`, the block, and the state as parameters.
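As referenced in the `Validate` item above, the checks can be summarized in a sketch like the following; the `State` and `Header` types are simplified stand-ins, not the actual Rollkit definitions.

```go
import (
	"bytes"
	"errors"
)

type State struct {
	Version         uint64
	InitialHeight   uint64
	LastBlockHeight uint64
	AppHash         []byte
	LastResultsHash []byte
	ValidatorsHash  []byte // result of Validators.Hash()
}

type Header struct {
	Version         uint64
	Height          uint64
	AppHash         []byte
	LastResultsHash []byte
	AggregatorsHash []byte
}

func validate(s State, h Header) error {
	if h.Version != s.Version {
		return errors.New("block version mismatch")
	}
	if s.LastBlockHeight == 0 && h.Height != s.InitialHeight {
		return errors.New("initial block height mismatch at genesis")
	}
	if s.LastBlockHeight > 0 && h.Height != s.LastBlockHeight+1 {
		return errors.New("block height is not last block height + 1")
	}
	if !bytes.Equal(h.AppHash, s.AppHash) {
		return errors.New("AppHash mismatch")
	}
	if !bytes.Equal(h.LastResultsHash, s.LastResultsHash) {
		return errors.New("LastResultsHash mismatch")
	}
	if !bytes.Equal(h.AggregatorsHash, s.ValidatorsHash) {
		return errors.New("AggregatorsHash mismatch")
	}
	return nil
}
```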
Message Structure/Communication Format
The `BlockExecutor` communicates with the application via the ABCI interface. It calls the ABCI methods `InitChainSync`, `FinalizeBlock`, and `Commit` for initializing a new chain and for finalizing and committing blocks, respectively.
Assumptions and Considerations
The `BlockExecutor` assumes that there is a consensus connection available to the app, which can be used to send and receive ABCI messages. In addition, there are some important pre-condition and post-condition invariants, as follows:

- `InitChain`:
  - pre-condition:
    - state is at genesis.
  - post-condition:
    - new chain is initialized.
- `CreateBlock`:
  - pre-condition:
    - chain is initialized.
  - post-condition:
    - new block is created.
- `ApplyBlock`:
  - pre-condition:
    - block is valid, using basic block validation rules as well as the validations performed in `Validate`, as described above.
  - post-condition:
    - block is added to the chain, the state is updated, and the block execution responses are captured.
- `Commit`:
  - pre-condition:
    - block has been applied.
  - post-condition:
    - block is committed.
    - mempool is cleared of the block's transactions.
    - block events are published.
    - state App Hash is updated with the result from `ResponseCommit`.
Implementation
See block executor
References
[1] Block Executor
[2] Block Manager
[3] Block Validation
Block and Header Validity
Abstract
Like all blockchains, rollups are defined as the chain of valid blocks from the genesis block to the head. Thus, the block and header validity rules define the chain.

Verifying a block/header is done in 3 parts:

1. Verify correct serialization according to the protobuf spec
2. Perform basic validation of the types
3. Perform verification of the new block against the previously accepted block
Basic Validation
Each type contains a `.ValidateBasic()` method, which verifies that certain basic invariants hold. The `ValidateBasic()` calls are nested, starting from the `Block` struct, all the way down to each subfield.

The nested basic validation, and validation checks, are called as follows:

```
Block.ValidateBasic()
  // Make sure the block's SignedHeader passes basic validation
  SignedHeader.ValidateBasic()
    // Make sure the SignedHeader's Header passes basic validation
    Header.ValidateBasic()
      verify ProposerAddress not nil
    // Make sure the SignedHeader's signature passes basic validation
    Signature.ValidateBasic()
      // Ensure that someone signed the block
      verify len(c.Signatures) not 0
    If sh.Validators is nil, or len(sh.Validators.Validators) is 0,
    assume based rollup, pass validation, and skip all remaining checks.
    Validators.ValidateBasic()
      // github.com/rollkit/cometbft/blob/main/types/validator.go#L37
      verify sh.Validators is not nil, and len(sh.Validators.Validators) != 0
      // apply basic validation to all Validators
      for each validator:
        validator.ValidateBasic()
          validate not nil
          validator.PubKey not nil
          validator.VotingPower >= 0
          validator.Address == correct size
      // apply ValidateBasic to the proposer field:
      sh.Validators.Proposer.ValidateBasic()
        validate not nil
        validator.PubKey not nil
        validator.VotingPower >= 0
        validator.Address == correct size
    Assert that SignedHeader.Validators.Hash() == SignedHeader.AggregatorsHash
    Verify SignedHeader.Signature
  Data.ValidateBasic() // always passes
  // make sure the SignedHeader's DataHash is equal to the hash of the actual data in the block.
  Data.Hash() == SignedHeader.DataHash
```
Verification Against Previous Block
```
// code does not match spec: see https://github.com/rollkit/rollkit/issues/1277
Block.Verify()
  SignedHeader.Verify(untrustH *SignedHeader)
    // basic validation removed in #1231, because go-header already validates it
    //untrustH.ValidateBasic()
    Header.Verify(untrustH *SignedHeader)
      if untrustH.Height == h.Height + 1, then apply the following check:
        untrustH.AggregatorsHash[:] == h.NextAggregatorsHash[:]
      if untrustH.Height > h.Height + 1:
        soft verification failure
      // We should know they're adjacent now,
      // verify the link to previous.
      untrustH.LastHeaderHash == h.Header.Hash()
      // Verify LastCommit hash
      untrustH.LastCommitHash == sh.Signature.GetCommitHash(...)
```
Block
Field Name | Valid State | Validation |
---|---|---|
SignedHeader | Header of the block, signed by proposer | (See SignedHeader) |
Data | Transaction data of the block | Data.Hash == SignedHeader.DataHash |
SignedHeader
Field Name | Valid State | Validation |
---|---|---|
Header | Valid header for the block | Header passes ValidateBasic() and Verify() |
Signature | 1 valid signature from the centralized sequencer | Signature passes ValidateBasic() , with additional checks in SignedHeader.ValidateBasic() |
Validators | Array of Aggregators, must have length exactly 1. | Validators passes ValidateBasic() |
Header
Note: The `AggregatorsHash` and `NextAggregatorsHash` fields have been removed. Rollkit vA should ignore all Valset updates from the ABCI app and always enforce that the proposer is the centralized sequencer set as the single validator in the genesis block.
Field Name | Valid State | Validation |
---|---|---|
*BaseHeader* | | |
Height | Height of the previous accepted header, plus 1 | Checked in the `Verify()` step |
Time | Timestamp of the block | Not validated in Rollkit |
ChainID | The hard-coded ChainID of the chain | Should be checked as soon as the header is received |
*Header* | | |
Version | Unused | |
LastHeaderHash | The hash of the previous accepted block | Checked in the `Verify()` step |
LastCommitHash | The hash of the previous accepted block's commit | Checked in the `Verify()` step |
DataHash | Correct hash of the block's Data field | Checked in the `ValidateBasic()` step |
ConsensusHash | Unused | |
AppHash | The correct state root after executing the block's transactions against the accepted state | Checked during block execution |
LastResultsHash | Correct results from executing transactions | Checked during block execution |
ProposerAddress | Address of the expected proposer | Checked in the `Verify()` step |
Signature | Signature of the expected proposer | Signature verification occurs in the `ValidateBasic()` step |
ValidatorSet
Field Name | Valid State | Validation |
---|---|---|
Validators | Array of validators, each must pass Validator.ValidateBasic() | Validator.ValidateBasic() |
Proposer | Must pass Validator.ValidateBasic() | Validator.ValidateBasic() |
DA
Rollkit provides a wrapper for go-da, a generic data availability interface for modular blockchains, called `DAClient`, with wrapper functionalities like `SubmitBlocks` and `RetrieveBlocks` to help the block manager interact with DA more easily.
Details
`DAClient` can connect via either gRPC or JSON-RPC transports using the go-da proxy/grpc or proxy/jsonrpc implementations. The connection can be configured using the following CLI flags:

- `--rollkit.da_address`: URL address of the DA service (default: "grpc://localhost:26650")
- `--rollkit.da_auth_token`: authentication token of the DA service
- `--rollkit.da_namespace`: namespace to use when submitting blobs to the DA service
Given a set of blocks to be submitted to DA by the block manager, `SubmitBlocks` first encodes the blocks using protobuf (the encoded data are called blobs) and invokes the `Submit` method on the underlying DA implementation. On successful submission (`StatusSuccess`), the DA block height that included the rollup blocks is returned.
To make sure that the serialized blocks don't exceed the underlying DA's blob limits, it fetches the blob size limit by calling `Config`, which returns the limit as `uint64` bytes, then includes serialized blocks until the limit is reached. If the limit is reached, it submits the partial set and returns the count of successfully submitted blocks as `SubmittedCount`. The caller should retry with the remaining blocks until all the blocks are submitted. If the first block alone exceeds the limit, an error is returned.
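The batching-by-size logic can be sketched as follows; `splitByBlobLimit` is a hypothetical helper, not part of the go-da or DAClient API.

```go
import "errors"

// splitByBlobLimit returns the prefix of blobs whose cumulative size fits
// within limit; the caller submits this prefix and retries with the rest.
func splitByBlobLimit(blobs [][]byte, limit uint64) ([][]byte, error) {
	var total uint64
	for i, blob := range blobs {
		total += uint64(len(blob))
		if total > limit {
			if i == 0 {
				return nil, errors.New("first blob alone exceeds the DA blob size limit")
			}
			return blobs[:i], nil // partial set; SubmittedCount would be i
		}
	}
	return blobs, nil // everything fits
}
```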
The `Submit` call may result in an error (`StatusError`), depending on the underlying DA implementation, in the following scenarios:

- the total size of the blobs exceeds the underlying DA's limits (including empty blobs)
- implementation-specific failures occur; e.g., for celestia-da, an invalid namespace, failure to create the commitment or proof, or setting too low a gas price can return an error
`RetrieveBlocks` retrieves the rollup blocks for a given DA height using the go-da `GetIDs` and `Get` methods. If there are no blocks available for a given DA height, `StatusNotFound` is returned (which is not an error case). The retrieved blobs are converted back to rollup blocks and returned on successful retrieval.
Both `SubmitBlocks` and `RetrieveBlocks` may be unsuccessful if the DA node or the DA blockchain used by the DA implementation experiences failures, for example: the DA mempool is full, the DA submit transaction's nonce clashes with another transaction from the DA submitter account, or the DA node is not synced.
Implementation
References
[1] go-da
[2] celestia-da
[3] proxy/grpc
[4] proxy/jsonrpc
Full Node
Abstract
A Full Node is a top-level service that encapsulates different components of Rollkit and initializes/manages them.
Details
Full Node Details
A Full Node is initialized inside the Cosmos SDK start script along with the node configuration, a private key to use in the P2P client, a private key for signing blocks as a block proposer, a client creator, a genesis document, and a logger. It uses them to initialize the components described below. The components TxIndexer, BlockIndexer, and IndexerService exist to ensure CometBFT compatibility, since they are needed for most of the RPC calls from the `SignClient` interface of CometBFT.
Note that unlike a light node which only syncs and stores block headers seen on the P2P layer, the full node also syncs and stores full blocks seen on both the P2P network and the DA layer. Full blocks contain all the transactions published as part of the block.
The Full Node mainly encapsulates and initializes/manages the following components:
proxyApp
The Cosmos SDK start script passes a client creator constructed using the relevant Cosmos SDK application to the Full Node's constructor which is then used to create the proxy app interface. When the proxy app is started, it establishes different ABCI app connections including Mempool, Consensus, Query, and Snapshot. The full node uses this interface to interact with the application.
genesisDoc
The genesis document contains information about the initial state of the rollup chain, in particular its validator set.
conf
The node configuration contains all the necessary settings for the node to be initialized and function properly.
P2P
The peer-to-peer client is used to gossip transactions between full nodes in the network.
Mempool
The Mempool is the transaction pool where all the transactions are stored before they are added to a block.
Store
The Store is initialized with `DefaultStore`, an implementation of the store interface, which is used for storing and retrieving blocks, commits, and state.
blockManager
The Block Manager is responsible for managing the operations related to blocks such as creating and validating blocks.
dalc
The Data Availability Layer Client is used to interact with the data availability layer. It is initialized with the DA Layer and DA Config specified in the node configuration.
hExService
The Header Sync Service is used for syncing block headers between nodes over P2P.
bSyncService
The Block Sync Service is used for syncing blocks between nodes over P2P.
Message Structure/Communication Format
The Full Node communicates with other nodes in the network using the P2P client. It also communicates with the application using the ABCI proxy connections. The communication format is based on the P2P and ABCI protocols.
Assumptions and Considerations
The Full Node assumes that the configuration, private keys, client creator, genesis document, and logger are correctly passed in by the Cosmos SDK. It also assumes that the P2P client, data availability layer client, mempool, block manager, and other services can be started and stopped without errors.
Implementation
See full node
References
[1] Full Node
[2] ABCI Methods
[3] Genesis Document
[6] Mempool
[7] Store
[8] Store Interface
[9] Block Manager
[10] Data Availability Layer Client
[11] Header Sync Service
[12] Block Sync Service
Header Sync
Abstract
The nodes in the P2P network sync headers using the header sync service that implements the go-header interface. The header sync service consists of several components as listed below.
Component | Description |
---|---|
store | a headerEx prefixed datastore where synced headers are stored |
subscriber | a libp2p node pubsub subscriber |
P2P server | a server for handling header requests between peers in the P2P network |
exchange | a client that enables sending in/out-bound header requests from/to the P2P network |
syncer | a service for efficient synchronization for headers. When a P2P node falls behind and wants to catch up to the latest network head via P2P network, it can use the syncer. |
Details
All three types of nodes (sequencer, full, and light) run the header sync service to maintain the canonical view of the rollup chain (with respect to the P2P network).
The header sync service inherits the `ConnectionGater` from the node's P2P client, which enables blocking and allowing peers as needed by specifying the `P2PConfig.BlockedPeers` and `P2PConfig.AllowedPeers` configuration parameters.
`NodeConfig.BlockTime` is used to configure the syncer so that it can effectively identify outdated headers while receiving headers from the P2P network.
Both header and block sync utilize the go-header library and run two separate sync services, one for headers and one for blocks. This distinction mainly serves light nodes, which do not store blocks, but only headers synced from the P2P network.
Consumption of Header Sync
The sequencer node, upon successfully creating a block, publishes the signed block header to the P2P network using the header sync service. The full/light nodes run the header sync service in the background to receive and store the signed headers from the P2P network. Currently, the full/light nodes do not consume the P2P-synced headers; however, they may be used in the future to perform certain checks.
Assumptions
- The header sync store is created by prefixing `headerSync` on the main datastore.
- The genesis `ChainID` is used to create the `PubsubTopicID` in go-header. For example, for ChainID `gm`, the pubsub topic id is `/gm/header-sub/v0.0.1`. Refer to the go-header specs for further details.
- The header store must be initialized with the genesis header before starting the syncer service. The genesis header can be loaded by passing the genesis header hash via the `NodeConfig.TrustedHash` configuration parameter or by querying the P2P network. This imposes a time constraint: full/light nodes have to wait for the sequencer to publish the genesis header to the P2P network before starting the header sync service.
- Header sync works only when the node is connected to the P2P network by specifying the initial seeds to connect to via the `P2PConfig.Seeds` configuration parameter.
- The node's context is passed down to all the components of the P2P header sync to control shutting down the service either abruptly (in case of failure) or gracefully (during successful scenarios).
Implementation
The header sync implementation can be found in block/sync_service.go. The full and light nodes create and start the header sync service under full and light.
References
[1] Header Sync
[2] Full Node
[3] Light Node
[4] go-header
Indexer Service
Abstract
The Indexer service indexes transactions and blocks using events emitted by the block executor that can later be used to build services like block explorers that ingest this data.
Protocol/Component Description
The Indexer service is started and stopped along with a Full Node. It consists of three main components: an event bus, a transaction indexer and a block indexer.
Event Bus
The event bus is a messaging service between the full node and the indexer service. It serves events emitted by the block executor of the full node to the indexer service where events are routed to the relevant indexer component based on type.
Block Indexer
The Block Indexer indexes BeginBlock and EndBlock events with an underlying KV store. Block events are indexed by their height, such that matching search criteria returns the respective block height(s).
Transaction Indexer
The Transaction Indexer is a key-value store-backed indexer that provides functionalities for indexing and searching transactions. It allows for the addition of a batch of transactions, indexing and storing a single transaction, retrieving a transaction specified by hash, and querying for transactions based on specific conditions. The indexer also supports range queries and can return results based on the intersection of multiple conditions.
Message Structure/Communication Format
The `publishEvents` method in the block executor is responsible for broadcasting several types of events through the event bus. These events include `EventNewBlock`, `EventNewBlockHeader`, `EventNewBlockEvents`, `EventNewEvidence`, and `EventTx`. Each of these events carries specific data related to the block or transaction it represents.
- `EventNewBlock`: Triggered when a new block is finalized. It carries the block data along with the results of the `FinalizeBlock` ABCI method.
- `EventNewBlockHeader`: Triggered alongside the `EventNewBlock` event. It carries the block header data.
- `EventNewBlockEvents`: Triggered when a new block is finalized. It carries the block height, the events associated with the block, and the number of transactions in the block.
- `EventNewEvidence`: Triggered for each piece of evidence in the block. It carries the evidence and the height of the block.
- `EventTx`: Triggered for each transaction in the block. It carries the result of the `DeliverTx` ABCI method for the transaction.
The `OnStart` method in `indexer_service.go` subscribes to these events. It listens for new blocks and transactions and, upon receiving these events, indexes the transactions and blocks accordingly. The block indexer indexes `EventNewBlockEvents`, while the transaction indexer indexes the events inside `EventTx`. The events `EventNewBlock`, `EventNewBlockHeader`, and `EventNewEvidence` are not currently used by the indexer service.
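The routing described above can be sketched as follows, with hypothetical event and indexer types standing in for the CometBFT-derived ones.

```go
import "context"

// Hypothetical, simplified event and indexer types.
type EventTx struct{ Result []byte }
type EventNewBlockEvents struct{ Height uint64 }

type TxIndexer interface{ Index(EventTx) error }
type BlockIndexer interface{ Index(EventNewBlockEvents) error }

// route listens on the event bus subscriptions and dispatches each event
// to the matching indexer until the context is cancelled.
func route(ctx context.Context, txs <-chan EventTx, blocks <-chan EventNewBlockEvents,
	ti TxIndexer, bi BlockIndexer) {
	for {
		select {
		case <-ctx.Done():
			return
		case e := <-txs:
			_ = ti.Index(e) // transaction indexer handles EventTx
		case e := <-blocks:
			_ = bi.Index(e) // block indexer handles EventNewBlockEvents
		}
	}
}
```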
Assumptions and Considerations
The indexer service assumes that the messages passed by the block executor are valid block headers and valid transactions with the required fields such that they can be indexed by the respective block indexer and transaction indexer.
Implementation
See indexer service
References
[1] Block Indexer
[3] Publish Events
[4] Indexer Service
Mempool
Abstract
The mempool module stores transactions which have not yet been included in a block, and provides an interface to check the validity of incoming transactions. It's defined by an interface here, with an implementation here.
Component Description
Full nodes instantiate a mempool here. A `p2p.GossipValidator` is constructed from the node's mempool here, which is used by Rollkit's P2P code to deal with peers who gossip invalid transactions. The mempool is also passed into the block manager constructor, which creates a `BlockExecutor` from the mempool.

The `BlockExecutor` calls `ReapMaxBytesMaxGas` in `CreateBlock` to get transactions from the pool for the new block. When `commit` is called, the `BlockExecutor` calls `Update(...)` on the mempool, removing the old transactions from the pool.
Communication
Several RPC methods query the mempool module: `BroadcastTxCommit`, `BroadcastTxAsync`, and `BroadcastTxSync` call the mempool's `CheckTx(...)` method.
Interface
Function Name | Input Arguments | Output Type | Intended Behavior |
---|---|---|---|
CheckTx | tx types.Tx, callback func(*abci.Response), txInfo TxInfo | error | Executes a new transaction against the application to determine its validity and whether it should be added to the mempool. |
RemoveTxByKey | txKey types.TxKey | error | Removes a transaction, identified by its key, from the mempool. |
ReapMaxBytesMaxGas | maxBytes, maxGas int64 | types.Txs | Reaps transactions from the mempool up to maxBytes bytes total with the condition that the total gasWanted must be less than maxGas. If both maxes are negative, there is no cap on the size of all returned transactions (~ all available transactions). |
ReapMaxTxs | max int | types.Txs | Reaps up to max transactions from the mempool. If max is negative, there is no cap on the size of all returned transactions (~ all available transactions). |
Lock | N/A | N/A | Locks the mempool. The consensus must be able to hold the lock to safely update. |
Unlock | N/A | N/A | Unlocks the mempool. |
Update | blockHeight uint64, blockTxs types.Txs, deliverTxResponses []*abci.ResponseDeliverTx, newPreFn PreCheckFunc, newPostFn PostCheckFunc | error | Informs the mempool that the given txs were committed and can be discarded. This should be called after block is committed by consensus. Lock/Unlock must be managed by the caller. |
FlushAppConn | N/A | error | Flushes the mempool connection to ensure async callback calls are done, e.g., from CheckTx. Lock/Unlock must be managed by the caller. |
Flush | N/A | N/A | Removes all transactions from the mempool and caches. |
TxsAvailable | N/A | <-chan struct{} | Returns a channel which fires once for every height when transactions are available in the mempool. The returned channel may be nil if EnableTxsAvailable was not called. |
EnableTxsAvailable | N/A | N/A | Initializes the TxsAvailable channel, ensuring it will trigger once every height when transactions are available. |
Size | N/A | int | Returns the number of transactions in the mempool. |
SizeBytes | N/A | int64 | Returns the total size of all txs in the mempool. |
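To illustrate the reap/update cycle described above, here is a hedged sketch of how a block producer drives this interface; the `Mempool` interface below is a simplified subset of the real one shown in the table.

```go
type Tx []byte

// Mempool is a simplified subset of the interface documented above.
type Mempool interface {
	ReapMaxBytesMaxGas(maxBytes, maxGas int64) []Tx
	Lock()
	Unlock()
	Update(blockHeight uint64, blockTxs []Tx) error
}

// buildAndCommit reaps transactions for a new block, then, after the block
// is committed, tells the mempool to discard them.
func buildAndCommit(mp Mempool, height uint64, maxBytes, maxGas int64) error {
	txs := mp.ReapMaxBytesMaxGas(maxBytes, maxGas) // CreateBlock path
	// ... the block is built, applied, and committed here ...
	mp.Lock() // Lock/Unlock must be managed by the caller
	defer mp.Unlock()
	return mp.Update(height, txs) // discard committed txs from the pool
}
```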
P2P
Every rollup node (both full and light) runs a P2P client using go-libp2p P2P networking stack for gossiping transactions in the rollup's P2P network. The same P2P client is also used by the header and block sync services for gossiping headers and blocks.
The following parameters are required for creating a new instance of a P2P client:
- P2PConfig (described below)
- go-libp2p private key used to create a libp2p connection and join the p2p network.
- chainID: rollup identifier used as a namespace within the p2p network for peer discovery. The namespace acts as a sub-network in the p2p network, where peer connections are limited to the same namespace.
- datastore: an instance of go-datastore used for creating a connection gater and storing blocked and allowed peers.
- logger
```go
// P2PConfig stores configuration related to peer-to-peer networking.
type P2PConfig struct {
	ListenAddress string // Address to listen for incoming connections
	Seeds         string // Comma separated list of seed nodes to connect to
	BlockedPeers  string // Comma separated list of nodes to ignore
	AllowedPeers  string // Comma separated list of nodes to whitelist
}
```
A P2P client also instantiates a connection gater to block and allow peers specified in the `P2PConfig`.
It also sets up a gossiper using the gossip topic `<chainID>+<txTopicSuffix>` (`txTopicSuffix` is defined in p2p/client.go), a Distributed Hash Table (DHT) using the `Seeds` defined in the `P2PConfig`, and peer discovery using go-libp2p's `discovery.RoutingDiscovery`.
A P2P client provides an interface `SetTxValidator(p2p.GossipValidator)` for specifying a gossip validator, which defines how to handle the incoming `GossipMessage` in the P2P network. The `GossipMessage` represents a message gossiped via the P2P network (e.g., a transaction, a block, etc.).
```go
// GossipValidator is a callback function type.
type GossipValidator func(*GossipMessage) bool
```

The full nodes define a transaction validator (shown below) as the gossip validator for processing gossiped transactions and adding them to the mempool, whereas light nodes simply pass a dummy validator, since light nodes do not process gossiped transactions.

```go
// newTxValidator creates a pubsub validator that uses the node's mempool to check the
// transaction. If the transaction is valid, then it is added to the mempool.
func (n *FullNode) newTxValidator() p2p.GossipValidator {
	return func(m *p2p.GossipMessage) bool {
		return n.Mempool.CheckTx(m.Data, nil, mempool.TxInfo{}) == nil // sketch of the real check
	}
}

// falseValidator is a dummy validator that always returns false.
func (ln *LightNode) falseValidator() p2p.GossipValidator {
	return func(*p2p.GossipMessage) bool { return false }
}
```
References
[1] client.go
[2] go-datastore
[3] go-libp2p
[4] conngater
Rollkit RPC Equivalency
Abstract
Rollkit RPC is a remote procedure call (RPC) service that provides a set of endpoints for interacting with a Rollkit node. It supports various protocols such as URI over HTTP, JSONRPC over HTTP, and JSONRPC over WebSockets.
Protocol/Component Description
Rollkit RPC serves a variety of endpoints that allow clients to query the state of the blockchain, broadcast transactions, and subscribe to events. The RPC service follows the specifications outlined in the CometBFT specification.
Rollkit RPC Functionality Coverage
Routes | Full Node | Test Coverage |
---|---|---|
Health | ✅ | ❌ |
Status | ✅ | 🚧 |
NetInfo | ✅ | ❌ |
Blockchain | ✅ | 🚧 |
Block | ✅ | 🚧 |
BlockByHash | ✅ | 🚧 |
BlockResults | ✅ | 🚧 |
Commit | ✅ | 🚧 |
Validators | ✅ | 🚧 |
Genesis | ✅ | 🚧 |
GenesisChunked | ✅ | 🚧 |
ConsensusParams | ✅ | 🚧 |
UnconfirmedTxs | ✅ | 🚧 |
NumUnconfirmedTxs | ✅ | 🚧 |
Tx | ✅ | 🚧 |
BroadCastTxSync | ✅ | 🚧 |
BroadCastTxAsync | ✅ | 🚧 |
Message Structure/Communication Format
The communication format depends on the protocol used. For HTTP-based protocols, the request and response are typically structured as JSON objects. For WebSocket-based protocols, the messages are sent as JSONRPC requests and responses.
Assumptions and Considerations
The RPC service assumes that the Rollkit node it interacts with is running and correctly configured. It also assumes that the client is authorized to perform the requested operations.
Implementation
The implementation of the Rollkit RPC service can be found in the `rpc/json/service.go` file in the Rollkit repository.
References
[1] CometBFT RPC Specification
[2] RPC Service Implementation
Store
Abstract
The Store interface defines methods for storing and retrieving blocks, commits, and the state of the blockchain.
Protocol/Component Description
The Store interface defines the following methods:

- `Height`: Returns the height of the highest block in the store.
- `SetHeight`: Sets the given height in the store if it is higher than the existing height in the store.
- `SaveBlock`: Saves a block along with its seen signature.
- `GetBlock`: Returns a block at a given height.
- `GetBlockByHash`: Returns a block with a given block header hash.
- `SaveBlockResponses`: Saves block responses in the Store.
- `GetBlockResponses`: Returns block results at a given height.
- `GetSignature`: Returns a signature for a block at a given height.
- `GetSignatureByHash`: Returns a signature for a block with a given block header hash.
- `UpdateState`: Updates the state saved in the Store. Only one State is stored.
- `GetState`: Returns the last state saved with UpdateState.
- `SaveValidators`: Saves the validator set at a given height.
- `GetValidators`: Returns the validator set at a given height.
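In Go terms, the interface looks roughly like the following sketch; the types here are simplified stand-ins, and the linked implementation is authoritative.

```go
// Simplified stand-ins for the real Rollkit types.
type (
	Block          struct{}
	Signature      []byte
	BlockResponses struct{}
	State          struct{}
	ValidatorSet   struct{}
	Hash           [32]byte
)

type Store interface {
	Height() uint64
	SetHeight(height uint64)
	SaveBlock(block *Block, seenSignature *Signature) error
	GetBlock(height uint64) (*Block, error)
	GetBlockByHash(hash Hash) (*Block, error)
	SaveBlockResponses(height uint64, responses *BlockResponses) error
	GetBlockResponses(height uint64) (*BlockResponses, error)
	GetSignature(height uint64) (*Signature, error)
	GetSignatureByHash(hash Hash) (*Signature, error)
	UpdateState(state State) error
	GetState() (State, error)
	SaveValidators(height uint64, vals *ValidatorSet) error
	GetValidators(height uint64) (*ValidatorSet, error)
}
```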
The `TxnDatastore` interface inside go-datastore is used for constructing different key-value stores for the underlying storage of a full node. There are two different implementations of `TxnDatastore` in kv.go:

- `NewDefaultInMemoryKVStore`: Builds a key-value store that uses the BadgerDB library and operates in memory, without accessing the disk. Used only across unit tests and integration tests.
- `NewDefaultKVStore`: Builds a key-value store that uses the BadgerDB library and stores the data on disk at the specified path.
A Rollkit full node is initialized using `NewDefaultKVStore` as the base key-value store for underlying storage. To store various types of data in this base key-value store, different prefixes are used: `mainPrefix`, `dalcPrefix`, and `indexerPrefix`. The `mainPrefix` equal to `0` is used for the main node data, `dalcPrefix` equal to `1` is used for Data Availability Layer Client (DALC) data, and `indexerPrefix` equal to `2` is used for indexing-related data.
For the main node data, the `DefaultStore` struct, an implementation of the Store interface, is used with the following prefixes for various types of data within it:

- `blockPrefix` with value "b": Used to store blocks in the key-value store.
- `indexPrefix` with value "i": Used to index the blocks stored in the key-value store.
- `commitPrefix` with value "c": Used to store commits related to the blocks.
- `statePrefix` with value "s": Used to store the state of the blockchain.
- `responsesPrefix` with value "r": Used to store responses related to the blocks.
- `validatorsPrefix` with value "v": Used to store validator sets at a given height.
For example, in a call to `GetBlockByHash` for some block hash `<block_hash>`, the key used in the full node's base key-value store will be `/0/b/<block_hash>`, where `0` is the main store prefix and `b` is the block prefix. Similarly, in a call to `GetValidators` for some height `<height>`, the key used in the full node's base key-value store will be `/0/v/<height>`, where `0` is the main store prefix and `v` is the validator set prefix.
Inside the key-value store, the value of these various types of data, such as `Block`, is stored as a byte array, which is encoded and decoded using the corresponding Protobuf marshal and unmarshal methods.
The store is most widely used inside the block manager and the full client to perform their functions correctly. Within the block manager, since it runs multiple goroutines, access to the store is protected by a mutex lock, `lastStateMtx`, to synchronize read/write access and prevent race conditions.
Message Structure/Communication Format
The Store does not communicate over the network, so there is no message structure or communication format.
Assumptions and Considerations
The Store assumes that the underlying datastore is reliable and provides atomicity for transactions. It also assumes that the data passed to it for storage is valid and correctly formatted.
Implementation
See Store Interface and Default Store for its implementation.
References
[1] Store Interface
[2] Default Store
[3] Full Node Store Initialization
[4] Block Manager
[5] Full Client
[6] Badger DB
[7] Go Datastore
[8] Key Value Store
[9] Serialization
Sequencer Selection Scheme
Abstract
The Sequencer Selection scheme describes the process of selecting a block proposer, i.e., the sequencer, from the validator set.
In the first version of Rollkit, this is a fixed pubkey, belonging to the centralized sequencer. The validator set may only ever have one "validator", the centralized sequencer.
Protocol/Component Description
There is exactly one sequencer, which is configured at genesis. `GenesisDoc` usually contains an array of validators, as it is imported from CometBFT. If there is more than one validator defined in the genesis validator set, an error is thrown.
The `Header` struct defines a field called `ProposerAddress`, which is the pubkey of the original proposer of the block.
The `SignedHeader` struct commits over the header and the proposer address and stores the result in `LastCommitHash`.
A new untrusted header is verified by checking its `ProposerAddress` and matching it against the best-known header. In case of a mismatch, an error is thrown.
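A minimal sketch of the proposer check, assuming a simplified, hypothetical `Header` type:

```go
import (
	"bytes"
	"errors"
)

type Header struct{ ProposerAddress []byte }

// verifyProposer rejects an untrusted header whose proposer does not match
// the single sequencer known from the best (trusted) header.
func verifyProposer(trusted, untrusted *Header) error {
	if !bytes.Equal(trusted.ProposerAddress, untrusted.ProposerAddress) {
		return errors.New("proposer address mismatch")
	}
	return nil
}
```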
Message Structure/Communication Format
The primary structures encompassing validator information are `SignedHeader`, `Header`, and `State`. Some fields are repurposed from CometBFT, as seen in `GenesisDoc.Validators`.
Assumptions and Considerations
- There must be exactly one validator defined in the genesis file, which determines the sequencer for all the blocks.
Implementation
The implementation is split across multiple functions, including `IsProposer`, `publishBlock`, `CreateBlock`, and `Verify`, among others, which are defined in various files like `state.go`, `manager.go`, `block.go`, `header.go`, etc., within the repository.
See block manager
References
[1] Block Manager