
Disk Growth Guidance

This document provides disk growth estimates and sizing recommendations for testnet and mainnet deployments.

The chain stores data in several components:

| Component | Location | Growth Rate | Description |
|---|---|---|---|
| Blocks | data/blocks/ | ~2-5 KB/block | Block headers and transaction data |
| State | data/state/ | Variable | Account state, contract storage |
| Marshal Archive | data/marshal/ | ~1-3 KB/finalization | Finality proofs and attestations |
| Checkpoints | data/checkpoints/ | ~10-50 MB/checkpoint | Periodic state snapshots |
Block size can be estimated as:

block_size = header_size + (tx_count * avg_tx_size)
           = ~200 bytes + (N * ~250 bytes)

For a network with:

  • 1 block/second
  • Average of 10 txs/block

each block works out to ~2.7 KB, for monthly growth of ~7 GB of raw blocks.
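A quick back-of-envelope check of these figures:

```python
# Sanity check of the block growth estimate (figures from this doc).
HEADER_SIZE = 200       # bytes, approximate
AVG_TX_SIZE = 250       # bytes, approximate
TXS_PER_BLOCK = 10
BLOCKS_PER_SEC = 1

block_size = HEADER_SIZE + TXS_PER_BLOCK * AVG_TX_SIZE   # 2,700 bytes
monthly_bytes = block_size * BLOCKS_PER_SEC * 86_400 * 30

print(f"block size: {block_size / 1_000:.1f} KB")        # ~2.7 KB
print(f"monthly growth: {monthly_bytes / 1e9:.1f} GB")   # ~7.0 GB
```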

Finality proofs include BLS signatures and attestations:

  • ~1-3 KB per finalization
  • With 1 finalization/second: ~2.5-7.5 GB/month

State growth depends heavily on contract activity:

  • Account creation: ~100 bytes/account
  • Contract deployment: ~1-100 KB/contract
  • Storage writes: 32-byte key + value size
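The per-component rates above can be combined into one rough monthly estimator. This is a sketch: the function name and the default state-activity parameters (accounts, contracts, and storage writes per month) are illustrative, not part of the node software:

```python
def monthly_growth_gb(blocks_per_sec=1.0, block_kb=2.7,
                      finalizations_per_sec=1.0, proof_kb=2.0,
                      new_accounts=0, account_bytes=100,
                      new_contracts=0, contract_kb=10.0,
                      storage_writes=0, avg_value_bytes=32):
    """Rough monthly disk growth (GB) from the per-component estimates."""
    seconds = 86_400 * 30
    blocks = blocks_per_sec * seconds * block_kb * 1_000
    proofs = finalizations_per_sec * seconds * proof_kb * 1_000
    state = (new_accounts * account_bytes
             + new_contracts * contract_kb * 1_000
             + storage_writes * (32 + avg_value_bytes))   # 32-byte key + value
    return (blocks + proofs + state) / 1e9

# Baseline from this doc: 1 block/s, 1 finalization/s (~2 KB mid-range proof)
print(f"~{monthly_growth_gb():.1f} GB/month")             # ~12.2 GB/month
```

Heavy contract activity shifts the total quickly, which is why state growth is the hardest component to budget for.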
Testnet validator:

| Resource | Minimum | Recommended |
|---|---|---|
| Disk | 100 GB SSD | 250 GB NVMe |
| Expected 6-month growth | ~50 GB | ~50 GB |
| Headroom | 50 GB | 200 GB |
Mainnet validator:

| Resource | Minimum | Recommended |
|---|---|---|
| Disk | 500 GB NVMe | 1 TB NVMe |
| Expected 1-year growth | ~200 GB | ~200 GB |
| Headroom | 300 GB | 800 GB |

Archive nodes retain all historical state:

| Resource | Minimum | Recommended |
|---|---|---|
| Disk | 2 TB NVMe | 4 TB NVMe |
| Expected 1-year growth | ~500 GB - 1 TB | ~500 GB - 1 TB |

Monitor these Prometheus metrics to track disk usage:

# Blocks per hour (growth rate)
rate(ashen_block_height[1h]) * 3600
# Blocks per day
rate(ashen_block_height[1d]) * 86400
# Transaction throughput (txs/sec)
rate(ashen_block_transaction_count[5m])
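These growth-rate figures can feed a simple runway check. A minimal sketch, assuming the inputs are read from the metrics above (the helper name and example numbers are illustrative):

```python
def days_until_full(avail_bytes: float, daily_growth_bytes: float) -> float:
    """Days of disk runway left at the current growth rate."""
    if daily_growth_bytes <= 0:
        return float("inf")
    return avail_bytes / daily_growth_bytes

# Example: 80 GB free, ~300 MB/day growth (testnet figure from this doc)
print(f"~{days_until_full(80e9, 0.3e9):.0f} days of runway")   # ~267 days
```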

The included dashboard (ops/grafana/ashen-dashboard.json) provides:

  • Block height and finalized height
  • Transaction throughput
  • RPC request rates
  • Block production timing
  • Txpool size

Recommended Prometheus alert rules:

groups:
  - name: disk-alerts
    rules:
      - alert: DiskSpaceLow
        expr: node_filesystem_avail_bytes{mountpoint="/data"} / node_filesystem_size_bytes{mountpoint="/data"} < 0.15
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Disk space below 15%"
      - alert: DiskSpaceCritical
        expr: node_filesystem_avail_bytes{mountpoint="/data"} / node_filesystem_size_bytes{mountpoint="/data"} < 0.05
        for: 1m
        labels:
          severity: critical
        annotations:
          summary: "Disk space below 5%"

Keep only recent blocks (e.g., last 10,000):

# Pruning is configured via node config
--pruning-keep-recent 10000
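At the block sizes estimated earlier, the retained window is small; a back-of-envelope sketch:

```python
KEEP_RECENT = 10_000   # blocks kept by --pruning-keep-recent
BLOCK_KB = 2.7         # ~2.7 KB/block from the estimate above

retained_mb = KEEP_RECENT * BLOCK_KB / 1_000
# Block data only; state and checkpoint growth are separate components.
print(f"retained recent blocks: ~{retained_mb:.0f} MB")   # ~27 MB
```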

State checkpoints are stored and accessible via RPC (NodeRpcV1.list_checkpoints, NodeRpcV1.get_checkpoint).

Note: Automatic checkpoint interval configuration (--checkpoint-interval) is not yet implemented. Checkpoints are currently created during state sync and snapshot operations.

Configure log rotation for node logs:

/etc/logrotate.d/ashen-node
/var/log/ashen-node/*.log {
    daily
    rotate 7
    compress
    delaycompress
    missingok
    notifempty
}
Estimate required disk capacity as:

required_disk = current_usage
              + (daily_growth * retention_days * headroom_factor)

For a testnet validator running 6 months:

  • Current usage: 20 GB
  • Daily growth: ~300 MB
  • Retention: 180 days
  • Headroom factor: 1.2
required_disk = 20 GB + (0.3 GB/day * 180 days * 1.2)
             = 20 GB + 54 GB + 10.8 GB
             = ~85 GB minimum
             = ~125 GB recommended (with buffer)
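The worked example can be wrapped in a small helper (a sketch; `required_disk_gb` is an illustrative name, with the example's 1.2 headroom factor as the default):

```python
def required_disk_gb(current_gb: float, daily_growth_gb: float,
                     retention_days: int, headroom_factor: float = 1.2) -> float:
    """Current usage plus projected growth, scaled by a headroom factor."""
    return current_gb + daily_growth_gb * retention_days * headroom_factor

# Testnet validator example from above: 20 GB used, 0.3 GB/day, 180 days
print(f"~{required_disk_gb(20, 0.3, 180):.0f} GB minimum")   # ~85 GB minimum
```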
Storage media:

  • Required: NVMe SSD for validators
  • Acceptable: SATA SSD for archive/RPC nodes
  • Not recommended: HDD (too slow for consensus)
| Node Type | Minimum IOPS | Recommended IOPS |
|---|---|---|
| Validator | 5,000 | 10,000+ |
| RPC Node | 3,000 | 5,000+ |
| Archive | 1,000 | 3,000+ |
Filesystem:

  • Recommended: ext4 or XFS
  • Mount options: noatime,nodiratime