ZFS RAID Calculator

Calculate usable storage, parity overhead, and fault tolerance for ZFS pools

Example pool configuration

6 × 4 TB disks in a single RAIDZ1 vdev (single parity — survives 1 disk failure per vdev; 1 parity + 5 data disks), with no hot spares (standby disks for automatic replacement):

Raw capacity: 24.00 TB (6 × 4 TB)
Usable capacity: 19.38 TB (80.7% efficiency)
Parity / mirror overhead: 4.00 TB (16.7% of raw)
Slop space (metadata): 640.00 GB (1/32, ~3.1% of zpool capacity)
Fault tolerance: 1 disk (1 per vdev × 1 vdev)

Layout: 1 × RAIDZ1 vdev (6 disks each)

ZFS RAID Level Comparison

Level    Min Disks  Fault Tolerance  Space Efficiency  Best For
Mirror   2          n-1 per vdev     50% (2-way)       Boot drives, high IOPS, small pools
RAIDZ1   3          1 per vdev       (n-1)/n           Small pools, non-critical data
RAIDZ2   4          2 per vdev       (n-2)/n           Most use cases, recommended default
RAIDZ3   5          3 per vdev       (n-3)/n           Large drives, archival, maximum safety
Stripe   1          None             100%              Scratch space, easily replaceable data
100% Client-Side Calculator

All calculations run in your browser. No data is sent to any server.
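The arithmetic behind the calculator is simple enough to reproduce by hand. A minimal sketch of the assumed model — usable capacity is data capacity after parity, minus the 1/32 slop reservation (`raidz_usable` is a hypothetical helper, not a ZFS command):

```shell
# Hypothetical helper reproducing the calculator's model:
# usable = (disks - parity) * disk_size, minus 1/32 slop space
raidz_usable() {  # usage: raidz_usable NUM_DISKS DISK_TB NUM_PARITY
  awk -v n="$1" -v tb="$2" -v p="$3" 'BEGIN {
    data = (n - p) * tb        # capacity left after parity disks
    slop = data / 32           # slop space reservation (1/32)
    printf "%.3f\n", data - slop
  }'
}

raidz_usable 6 4 1   # 6 x 4 TB RAIDZ1 -> prints 19.375
```

For the 6 × 4 TB RAIDZ1 example above: 20 TB of data capacity, minus 0.625 TB (640 GB) of slop, gives the 19.38 TB usable figure.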

What is ZFS?

ZFS (Zettabyte File System) is a combined filesystem and volume manager originally developed by Sun Microsystems for Solaris. Now maintained as OpenZFS and available on Linux, FreeBSD, and other platforms, ZFS is widely regarded as one of the most advanced filesystems for data storage. It eliminates the need for separate RAID controllers, volume managers, and filesystem layers by integrating all three into a single, coherent system.

ZFS uses a copy-on-write (CoW) transactional model — data is never overwritten in place. Instead, new data is written to a different location, and the metadata pointers are updated atomically. This design means the filesystem is always consistent, even after a power failure or crash. There is no need for fsck or filesystem repair tools.

Key ZFS features

Data Integrity

Every block is checksummed (SHA-256 or fletcher4). ZFS detects and automatically repairs silent data corruption (bit rot) using redundant copies.

Snapshots & Clones

Instant, space-efficient snapshots for backups and rollbacks. Clones create writable copies from snapshots with zero initial overhead.

Compression

Transparent inline compression (LZ4, ZSTD, GZIP). LZ4 is so fast it often improves performance by reducing I/O.

Native Encryption

Dataset-level AES-256-GCM encryption. Encrypted datasets can be replicated with zfs send --raw without ever exposing the plaintext or keys to the receiving side.
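All four features are controlled through dataset properties and one-line commands. A hedged sketch, assuming a hypothetical pool named `tank` (dataset names are illustrative):

```shell
# Checksums are on by default (fletcher4); sha256 is the stronger option
zfs set checksum=sha256 tank/important

# Instant snapshot, and a writable clone based on it
zfs snapshot tank/important@nightly
zfs clone tank/important@nightly tank/important-test

# Transparent inline compression
zfs set compression=lz4 tank/important

# Encrypted dataset (prompts for a passphrase)
zfs create -o encryption=on -o keyformat=passphrase tank/secret
```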

At NetOz, we use ZFS on our storage servers in the Adelaide data centre for customer backups and file storage. Its self-healing capabilities and snapshot support make it ideal for hosting infrastructure where data integrity is critical. A scheduled zfs send pipeline replicates snapshots to an off-site backup server for disaster recovery.
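A replication pipeline like the one described can be sketched as follows (host names, dataset names, and snapshot labels are illustrative, not the actual NetOz configuration):

```shell
# Take a new recursive snapshot, then send only the delta since the last one
zfs snapshot -r backup/customers@latest
zfs send -R -i @prev backup/customers@latest | \
  ssh offsite-host zfs receive -F tank/replica
```

The -i flag makes the send incremental, so only blocks changed since the previous snapshot cross the wire.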

ZFS RAID Levels Explained

ZFS implements its own RAID system called RAIDZ, which fixes several fundamental problems with traditional hardware RAID. Unlike hardware RAID5/6, RAIDZ uses variable-width stripes that eliminate the "write hole" vulnerability — a condition where a power failure during a write can leave parity and data out of sync, silently corrupting data.

Mirror (2-way, 3-way)

Every disk in the vdev holds a complete copy of the data. A 2-way mirror uses 50% of raw capacity for redundancy; a 3-way mirror uses 67%. Mirrors provide the best random read and write IOPS because reads can be served from any copy and writes go to all copies in parallel. Use mirrors for boot drives, databases, and any workload that needs high IOPS. Striped mirrors (multiple mirror vdevs) are the recommended layout for performance-sensitive applications.

RAIDZ1 (Single Parity)

Equivalent to RAID5 — one disk per vdev is used for parity. Survives one disk failure per vdev. With modern large drives (8TB+), resilver times can be 12–24 hours or more, during which a second failure destroys the vdev. For this reason, RAIDZ1 is not recommended for drives larger than 4TB or pools storing critical data. Best used for small pools with 3–5 smaller disks.

RAIDZ2 (Double Parity)

Equivalent to RAID6 — two disks per vdev are used for parity. Survives two simultaneous disk failures per vdev. RAIDZ2 is the recommended default for most ZFS deployments. It balances storage efficiency with safety, especially for large drives where resilver times are long. The optimal vdev width is 6–8 disks (4–6 data disks + 2 parity disks).

RAIDZ3 (Triple Parity)

Three disks per vdev for parity. Survives three simultaneous failures. Used for wide vdevs with many large disks (12+ drives of 8TB or larger), archival storage, or environments where the resilver of one drive might take days. The extra parity disk costs little compared to the insurance it provides on large arrays.

Striped Mirrors (Recommended for Performance)

Multiple mirror vdevs striped together. For example, 8 disks as 4 × 2-way mirrors gives you the capacity of 4 disks with excellent read/write IOPS and the ability to survive one failure per mirror pair. This is the layout ZFS experts recommend most often for databases, VMs, and general-purpose NAS storage. It offers the best balance of performance, redundancy, and ease of expansion (you can add mirror pairs later).
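As a sketch, the 8-disk striped-mirror layout described above could be created like this (pool and device names are illustrative):

```shell
# 4 x 2-way mirrors striped into one pool: capacity of 4 disks,
# survives one failure per mirror pair
zpool create tank \
  mirror /dev/sda /dev/sdb \
  mirror /dev/sdc /dev/sdd \
  mirror /dev/sde /dev/sdf \
  mirror /dev/sdg /dev/sdh
```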

ZFS Storage Planning

Planning a ZFS pool requires balancing capacity, performance, and fault tolerance. The choices you make at pool creation are largely permanent — you cannot change a vdev's RAID level or width after creation (though you can add new vdevs to a pool).

1

Choose Vdev Width Carefully

For RAIDZ, the recommended vdev widths are: RAIDZ1 with 3–5 disks, RAIDZ2 with 4–8 disks, RAIDZ3 with 5–12 disks. Wider vdevs are more space-efficient but slower for random IOPS and take longer to resilver. Multiple narrow vdevs (e.g., 2 × 6-disk RAIDZ2) outperform a single wide vdev (1 × 12-disk RAIDZ2) for the same number of disks.
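The space-efficiency side of that trade-off is easy to quantify: efficiency is simply data disks divided by total disks. A small sketch comparing the two 12-disk layouts mentioned above (`raidz_efficiency` is a hypothetical helper):

```shell
# Hypothetical helper: space efficiency = data disks / total disks
raidz_efficiency() {  # usage: raidz_efficiency NUM_VDEVS WIDTH PARITY
  awk -v v="$1" -v w="$2" -v p="$3" 'BEGIN {
    printf "%.1f\n", 100 * v * (w - p) / (v * w)
  }'
}

raidz_efficiency 2 6 2    # 2 x 6-disk RAIDZ2  -> prints 66.7
raidz_efficiency 1 12 2   # 1 x 12-disk RAIDZ2 -> prints 83.3
```

The wide vdev stores about 17% more data from the same disks; the price is slower random IOPS and longer resilvers.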

2

Use Hot Spares

Hot spares sit idle until a disk fails, at which point ZFS automatically begins resilvering onto the spare. This reduces the window of vulnerability during which a second failure could cause data loss. For production pools, one spare per 10–20 disks is a common recommendation.
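Adding a spare to an existing pool is a one-liner. A sketch, with illustrative pool and device names:

```shell
# Attach a hot spare to an existing pool
zpool add tank spare /dev/sdm

# Verify the spare appears in the pool layout as AVAIL
zpool status tank
```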

3

Account for ZFS Overhead

This calculator already subtracts the slop space — the 1/32 (~3.1%) of pool capacity that ZFS reserves for metadata and internal operations. With compression enabled (LZ4 is recommended), you may actually store more data than the usable number suggests — compressible data like logs, text, and databases can achieve 2:1 or better compression ratios.

4

Plan for Expansion

You can expand a ZFS pool by adding new vdevs, but you cannot add disks to an existing RAIDZ vdev (this is changing with RAIDZ expansion in OpenZFS 2.3+). If you anticipate growth, striped mirrors are the easiest to expand — just add another mirror pair. For RAIDZ, plan your final vdev count upfront.
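The two common expansion paths can be sketched like this (pool and device names are illustrative):

```shell
# Expand a striped-mirror pool by adding another mirror pair
zpool add tank mirror /dev/sdi /dev/sdj

# Or turn an existing single-disk vdev into a 2-way mirror
zpool attach tank /dev/sda /dev/sdb
```

Note that zpool add is permanent — a vdev cannot be removed from a RAIDZ pool afterwards, so double-check the layout before running it.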

Example: NetOz backup server pool creation

# Create a RAIDZ2 pool with 2 vdevs and 1 hot spare
zpool create backup \
  raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf \
  raidz2 /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl \
  spare /dev/sdm

# Enable compression and set record size
zfs set compression=lz4 backup
zfs set recordsize=1M backup

# Create datasets for customer backups
zfs create backup/customers
zfs create -o quota=500G backup/customers/netoz-client-01

Calculate subnet allocations with the Subnet Calculator, generate systemd service files with the Systemd Unit Generator, or create web server configs with the Web Server Config Generator.