
State Sync & Checkpoints

State sync allows new nodes to join the network without replaying the entire block history. Instead, a joining node downloads a verified state snapshot at a recent checkpoint and replays only the blocks after that point.

The state sync system has three layers:

  1. Checkpoints - periodic state commitments recorded at finalized heights
  2. Snapshots - exportable bundles of headers, MMR entries, and checkpoint metadata
  3. State streaming - a P2P protocol for downloading state entries in paginated chunks

```
        Checkpoint creation (validator)
         ┌─────────────┴────────────┐
         ▼                          ▼
┌──────────────────┐       ┌──────────────────┐
│ SnapshotManifest │       │  LightSnapshot   │
│ (full headers +  │       │ (single header + │
│  MMR entries +   │       │  MMR proof +     │
│  validator sets) │       │  finality proof) │
└────────┬─────────┘       └────────┬─────────┘
         │                          │
         ▼                          ▼
   Full sync path            Fast sync path
  (export/import)        (P2P state streaming)
```

A checkpoint is a (height, state_root) pair recorded when a block is finalized. The state_root is a Blake3 commitment over the execution state at that height.

Checkpoints are created during block finalization when the block height satisfies the configured interval:

```sh
# Configure checkpoint interval (every 1000 blocks)
ashen node run --checkpoint-interval 1000
```

When no interval is configured, checkpoints are created only during snapshot operations and state sync.

Checkpoints are stored under data/checkpoints/ and managed through the ChainStore trait:

| Operation | Description |
| --- | --- |
| `put_state_checkpoint` | Store a new checkpoint |
| `state_checkpoint(height)` | Fetch the checkpoint at a specific height |
| `latest_state_checkpoint` | Get the most recent checkpoint |
| `list_checkpoints(limit)` | List checkpoints ordered by height, descending |

A full snapshot contains everything needed to bootstrap a node from genesis:

| Field | Description |
| --- | --- |
| `headers` | Block headers from genesis to the checkpoint (ascending order) |
| `mmr_entries` | Finalized MMR entries corresponding to each header |
| `mmr_root` | Expected MMR root after all entries are added |
| `checkpoint` | Height and state root at the snapshot point |
| `validator_sets` | Validator sets for all epochs in the range |
| `finality_proof` | BLS threshold certificate for the checkpoint block |

Full snapshots use Borsh serialization and can be exported/imported via the export_snapshot and import_snapshot functions.
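The manifest layout can be sketched as a plain data structure. Field names come from the table above; the types are purely illustrative (the real type is a Borsh-serialized Rust struct):

```python
from dataclasses import dataclass, field

@dataclass
class SnapshotManifest:
    """Illustrative shape of a full snapshot; not the actual ashen type."""
    headers: list[bytes] = field(default_factory=list)         # genesis..checkpoint, ascending
    mmr_entries: list[bytes] = field(default_factory=list)     # one per header
    mmr_root: bytes = b""                                      # expected root after all entries
    checkpoint: tuple[int, bytes] = (0, b"")                   # (height, state_root)
    validator_sets: list[bytes] = field(default_factory=list)  # all epochs in the range
    finality_proof: bytes = b""                                # BLS threshold certificate
```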

A light snapshot is a minimal proof bundle for nodes that already have a trusted MMR root (e.g., from a light client):

| Field | Description |
| --- | --- |
| `checkpoint` | Height and state root |
| `header` | Block header at the checkpoint height |
| `mmr_proof` | MMR membership proof for the checkpoint block |
| `finality_proof` | BLS threshold certificate (optional) |
| `validator_set` | Validator set for the checkpoint epoch |

Light snapshots are verified against a trusted MMR root rather than replaying the full header chain.

New nodes download state entries over the follower sync P2P channel using a request/response protocol.

| Message | Direction | Purpose |
| --- | --- | --- |
| `StateCheckpointRequest` | Node -> Peer | Query checkpoint metadata at a height |
| `StateCheckpointResponse` | Peer -> Node | Returns height, state root, total entry count, block hash |
| `StateChunkRequest` | Node -> Peer | Request a page of state entries |
| `StateChunkResponse` | Peer -> Node | Returns state entries with a pagination cursor |

| Limit | Value |
| --- | --- |
| Max headers per response | 64 |
| Max finality proofs per response | 64 |
| Max state entries per chunk | 1,024 |
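Server-side chunking under the 1,024-entry cap can be sketched as follows (a hypothetical helper, with the state modeled as a flat list of entries):

```python
MAX_STATE_ENTRIES_PER_CHUNK = 1024

def serve_state_chunk(entries: list, offset: int, limit: int) -> dict:
    """Return one page of state entries plus a has_more cursor flag."""
    limit = min(limit, MAX_STATE_ENTRIES_PER_CHUNK)  # clamp to the protocol cap
    page = entries[offset:offset + limit]
    return {"entries": page, "has_more": offset + len(page) < len(entries)}
```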

The StateSyncSession manages the download lifecycle through five phases:

```
FetchingCheckpoint -> Downloading -> Importing -> Verifying -> Complete
        │                 │             │            │
        └─────────────────┴─────────────┴────────────┴──> Failed
```

Downloading: state entries are fetched in chunks via StateChunkRequest with offset-based pagination. Each chunk is validated against the checkpoint height.

Importing: pending entries are written to the state backend via import_pending().

Verifying: the state root is recomputed from imported entries and compared against the checkpoint’s state_root. A mismatch aborts the sync.

| Parameter | Default | Description |
| --- | --- | --- |
| `chunk_size` | 512 | Entries requested per chunk |
| `request_timeout` | 30 s | Timeout per chunk request |
| `max_concurrent_requests` | 4 | Parallel chunk downloads |
| `retry_attempts` | 3 | Retries per failed chunk |
| `retry_delay` | 1 s | Delay between retries |
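These defaults can be captured in a small config sketch (hypothetical field names mirroring the table; not the actual config type):

```python
from dataclasses import dataclass

@dataclass
class StateSyncConfig:
    """Defaults from the table above; an illustrative sketch."""
    chunk_size: int = 512
    request_timeout_secs: float = 30.0
    max_concurrent_requests: int = 4
    retry_attempts: int = 3
    retry_delay_secs: float = 1.0
```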

A new node joining the network follows this sequence:

The node requests block headers from peers via HeaderRequest messages, verifying:

  • Finality proofs (BLS threshold signatures) against known validator sets
  • Chain continuity (each header’s parent_hash matches the previous header’s hash)

The node queries a recent checkpoint:

```
StateCheckpointRequest { height: None }  // latest checkpoint
-> StateCheckpointResponse { height, state_root, total_entries, block_hash }
```

For verified sync, the node fetches a light snapshot and verifies it against a trusted MMR root:

```
get_light_snapshot(height)
-> { checkpoint, header, mmr_proof, finality_proof, validators, snapshot_bytes }
```

The node streams state entries in chunks:

```
StateChunkRequest { checkpoint_height: 5000, offset: 0, limit: 512 }
-> StateChunkResponse { entries: [...], has_more: true }
StateChunkRequest { checkpoint_height: 5000, offset: 512, limit: 512 }
-> StateChunkResponse { entries: [...], has_more: true }
... (continues until has_more == false)
```
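The client side of this exchange is a simple cursor loop. A sketch, where `fetch_chunk` stands in for the P2P request (retries and timeouts omitted):

```python
def download_all_entries(fetch_chunk, checkpoint_height: int,
                         chunk_size: int = 512) -> list:
    """Page through StateChunkResponse messages until has_more is false."""
    entries: list = []
    offset = 0
    while True:
        resp = fetch_chunk(checkpoint_height, offset, chunk_size)
        entries.extend(resp["entries"])
        offset += len(resp["entries"])
        if not resp["has_more"]:
            return entries
```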

After all entries are imported, the node recomputes the state root and verifies it matches the checkpoint. On success, the node transitions to normal operation and replays blocks from the checkpoint height to the current tip.
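The final check can be sketched as below. The real node computes a Blake3 commitment over the execution state; this sketch substitutes SHA-256 purely to stay stdlib-only, and assumes entries are (key, value) byte pairs:

```python
import hashlib

def recompute_state_root(entries: list[tuple[bytes, bytes]]) -> bytes:
    # Stand-in commitment: a deterministic hash over sorted (key, value)
    # pairs. The actual node uses a Blake3 commitment over execution state.
    h = hashlib.sha256()
    for key, value in sorted(entries):
        h.update(key)
        h.update(value)
    return h.digest()

def finish_sync(entries: list[tuple[bytes, bytes]],
                checkpoint_state_root: bytes) -> None:
    """Recompute the root from imported entries; a mismatch aborts the sync."""
    if recompute_state_root(entries) != checkpoint_state_root:
        raise ValueError("state root mismatch: aborting sync")
```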

SnapshotManifest::verify() checks:

| Check | Description |
| --- | --- |
| Version | Snapshot format version must match current |
| Header chain | Each header's parent_hash matches the prior header's hash |
| MMR root | Recomputed root from entries matches mmr_root |
| Entry count | MMR entry count matches header count |
| Checkpoint height | Must equal the last header's height |
| State root | Checkpoint state_root matches the header's state_root |
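The header-chain check from the table can be sketched as follows (headers modeled as dicts with hypothetical `hash`/`parent_hash` keys):

```python
def verify_header_chain(headers: list[dict]) -> bool:
    """Each header's parent_hash must equal the previous header's hash."""
    for prev, cur in zip(headers, headers[1:]):
        if cur["parent_hash"] != prev["hash"]:
            return False
    return True
```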

LightSnapshot::verify(trusted_root) checks:

| Check | Description |
| --- | --- |
| Version | Format version must match |
| Height | Checkpoint height matches header height |
| State root | Checkpoint state_root matches the header's state_root |
| MMR proof | Proof verifies against the trusted MMR root |
| Block hash | MMR proof entry hash matches the header hash |

A snapshot can be verified against an externally obtained MMR root (e.g., from a light client or a known good peer):

```
verify_against_trusted_root(manifest, trusted_root)
```

This runs the full structural verification plus confirms the manifest’s MMR root matches the trusted value.
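In pseudocode terms, the combined check amounts to the following sketch, where `verify()` stands for the structural checks tabled above:

```python
def verify_against_trusted_root(manifest, trusted_root: bytes) -> bool:
    """Full structural verification plus trusted-root comparison."""
    if not manifest.verify():  # version, header chain, MMR root, counts...
        return False
    return manifest.mmr_root == trusted_root
```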

| Method | Description |
| --- | --- |
| `get_checkpoint_list` | List checkpoint heights and state roots |
| `get_checkpoint(height)` | Get a checkpoint descriptor with optional archive info |
| `list_checkpoints(limit)` | List checkpoints with archive descriptors, newest first |

| Method | Description |
| --- | --- |
| `get_light_snapshot(height)` | Get a minimal verified snapshot for fast sync |
| `get_snapshot_chunk(height, offset, limit)` | Stream state entries for a checkpoint |
| `import_snapshot_chunk(height, state_root, entries, finalize)` | Admin: import state entries for bootstrap |

| Method | Description |
| --- | --- |
| `status` | Includes `latest_checkpoint` in the response |
| `finalized_history_root` | Current MMR root for snapshot verification |
| `finalized_history_proof(height)` | MMR membership proof for a finalized block |
| `finality_proof(height)` | BLS threshold certificate for a block |
```sh
ashen backup export --height 5000 --output snapshot.bin
```

```sh
ashen backup import --input snapshot.bin
```

```sh
# Verify a block is in the finalized history
ashen node verify --height 5000

# Verify against a specific RPC endpoint
ashen node verify --height 5000 --rpc-url http://localhost:3030
```

Checkpoints are stored under data/checkpoints/ and typically consume 10-50 MB each. See Disk Growth for sizing guidance.

  • Archive mode (max_retained_heights: None): retains all historical state. Useful for RPC nodes serving historical queries.
  • Pruning mode (max_retained_heights: N): retains only the most recent N heights of state. Reduces disk usage but limits which checkpoints can serve state sync requests.

The status RPC endpoint includes latest_checkpoint with height and state root. Monitor this to confirm checkpoints are being created at the expected interval.

```
# Track checkpoint creation
ashen_latest_checkpoint_height
```