21.2 The Z File System (ZFS)

The Z file system, developed by Sun™, is a new technology designed to use a pooled storage model: space is consumed only as it is needed for data storage. It has also been designed for maximum data integrity, supporting data snapshots, multiple copies, and data checksums. A new data replication model, known as RAID-Z, has been added. RAID-Z is similar to RAID5 but is designed to prevent data corruption during writes.

21.2.1 ZFS Tuning

The ZFS subsystem makes heavy use of system resources, so some tuning may be required to provide maximum efficiency during everyday use. Because ZFS is an experimental feature in FreeBSD, these recommendations may change in the near future; however, at this time, the following steps are recommended.

21.2.1.1 Memory

The total system memory should be at least one gigabyte, with two gigabytes or more recommended. In all of the examples here, the system has one gigabyte of memory with several other tuning mechanisms in place.

Some people have had success with less than one gigabyte of memory, but with such a limited amount of physical memory, FreeBSD is likely to panic due to memory exhaustion when the system is under heavy load.

21.2.1.2 Kernel Configuration

It is recommended that unused drivers and options be removed from the kernel configuration file. Since most devices are available as modules, they may be loaded using the /boot/loader.conf file.
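
For example, on systems where ZFS is not compiled into the kernel, the ZFS module can be loaded at boot by adding the following line to /boot/loader.conf:

zfs_load="YES"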

Users of the i386™ architecture should add the following option to their kernel configuration file, rebuild their kernel, and reboot:

options 	KVA_PAGES=512

This option will expand the kernel address space, thus allowing the vm.kvm_size tunable to be pushed beyond the currently imposed limit of 1 GB (2 GB for PAE). To find the most suitable value for this option, divide the desired address space in megabytes by four (4). In this case, it is 512 for 2 GB.

21.2.1.3 Loader Tunables

The kmem address space should be increased on all FreeBSD architectures. On the test system with one gigabyte of physical memory, success was achieved with the following settings, which should be placed in /boot/loader.conf before restarting the system:

vm.kmem_size="330M"
vm.kmem_size_max="330M"
vfs.zfs.arc_max="40M"
vfs.zfs.vdev.cache.size="5M"

For a more detailed list of recommendations for ZFS-related tuning, see http://wiki.freebsd.org/ZFSTuningGuide.

21.2.2 Using ZFS

There is a start-up mechanism that allows FreeBSD to mount ZFS pools during system initialization. To enable it, issue the following commands:

# echo 'zfs_enable="YES"' >> /etc/rc.conf
# /etc/rc.d/zfs start

The remainder of this document assumes that three SCSI disks are available with the device names da0, da1, and da2. Users of IDE hardware may substitute the corresponding ad device names for the SCSI ones.

21.2.2.1 Single Disk Pool

To create a simple, non-redundant ZFS pool using a single disk device, use the zpool command:

# zpool create example /dev/da0

To view the new pool, review the output of df:

# df
Filesystem  1K-blocks    Used    Avail Capacity  Mounted on
/dev/ad0s1a   2026030  235230  1628718    13%    /
devfs               1       1        0   100%    /dev
/dev/ad0s1d  54098308 1032846 48737598     2%    /usr
example      17547136       0 17547136     0%    /example

This output clearly shows that the example pool has not only been created but mounted as well. It is also accessible just like a normal file system: files may be created on it and users can browse it, as in the following example:

# cd /example
# ls
# touch testfile
# ls -al
total 4
drwxr-xr-x   2 root  wheel    3 Aug 29 23:15 .
drwxr-xr-x  21 root  wheel  512 Aug 29 23:12 ..
-rw-r--r--   1 root  wheel    0 Aug 29 23:15 testfile

Unfortunately, this pool is not yet taking advantage of any ZFS features. To change that, create a file system on this pool and enable compression on it:

# zfs create example/compressed
# zfs set compression=gzip example/compressed

example/compressed is now a compressed ZFS file system. Try copying some large files to /example/compressed to see the compression in action.
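
To gauge how effective the compression is once some data has been copied, the compression ratio may be inspected (an optional check):

# zfs get compressratio example/compressed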

The compression may now be disabled with:

# zfs set compression=off example/compressed

To unmount the file system, issue the following command and then verify by using the df utility:

# zfs umount example/compressed
# df
Filesystem  1K-blocks    Used    Avail Capacity  Mounted on
/dev/ad0s1a   2026030  235232  1628716    13%    /
devfs               1       1        0   100%    /dev
/dev/ad0s1d  54098308 1032864 48737580     2%    /usr
example      17547008       0 17547008     0%    /example

Re-mount the file system to make it accessible again, and verify with df:

# zfs mount example/compressed
# df
Filesystem         1K-blocks    Used    Avail Capacity  Mounted on
/dev/ad0s1a          2026030  235234  1628714    13%    /
devfs                      1       1        0   100%    /dev
/dev/ad0s1d         54098308 1032864 48737580     2%    /usr
example             17547008       0 17547008     0%    /example
example/compressed  17547008       0 17547008     0%    /example/compressed

The pool and file system may also be observed by viewing the output from mount:

# mount
/dev/ad0s1a on / (ufs, local)
devfs on /dev (devfs, local)
/dev/ad0s1d on /usr (ufs, local, soft-updates)
example on /example (zfs, local)
example/compressed on /example/compressed (zfs, local)

As observed, ZFS file systems, once created, may be used like ordinary file systems; however, many other features are also available. In the following example, a new file system, data, is created. Important files will be stored here, so the file system is set to keep two copies of each data block:

# zfs create example/data
# zfs set copies=2 example/data
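
To confirm that the property took effect, its value may be queried (an optional check):

# zfs get copies example/data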

It is now possible to see the data and space utilization by issuing df again:

# df
Filesystem         1K-blocks    Used    Avail Capacity  Mounted on
/dev/ad0s1a          2026030  235234  1628714    13%    /
devfs                      1       1        0   100%    /dev
/dev/ad0s1d         54098308 1032864 48737580     2%    /usr
example             17547008       0 17547008     0%    /example
example/compressed  17547008       0 17547008     0%    /example/compressed
example/data        17547008       0 17547008     0%    /example/data

Notice that every file system on the pool has the same amount of available space. This is the reason for using df throughout these examples: it shows that the file systems use only the space they need and all draw from the same pool. ZFS does away with concepts such as volumes and partitions, and allows several file systems to occupy the same pool. When the file systems and the pool are no longer needed, destroy them:

# zfs destroy example/compressed
# zfs destroy example/data
# zpool destroy example

Disks inevitably go bad and fail, and when a disk fails, the data on it is lost. One method of avoiding data loss due to a failed hard disk is to implement RAID. ZFS supports this feature in its pool design, which is covered in the next section.

21.2.2.2 ZFS RAID-Z

As previously noted, this section will assume that three SCSI disks exist as devices da0, da1 and da2 (or ad0 and beyond in case IDE disks are being used). To create a RAID-Z pool, issue the following command:

# zpool create storage raidz da0 da1 da2

Note: Sun recommends that the number of devices used in a RAID-Z configuration be between three and nine. If a single pool needs to consist of 10 disks or more, consider breaking it up into smaller RAID-Z groups. If only two disks are available and redundancy is still required, consider using a ZFS mirror instead, as shown in the example below. See the zpool(8) manual page for more details.
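
For example, if only two disks were available, a mirrored pool could instead be created with a command similar to the following (the pool and disk names are illustrative):

# zpool create storage mirror da0 da1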

The storage zpool should now have been created. This may be verified with the mount(8) and df(1) commands as before. More disk devices could have been included in the pool by adding them to the end of the command above. Make a new file system in the pool, called home, where user files will eventually be placed:

# zfs create storage/home

It is now possible to enable compression and keep extra copies of the users' home directories and files. This may be accomplished just as before using the following commands:

# zfs set copies=2 storage/home
# zfs set compression=gzip storage/home

To make this the new home directory for users, copy the user data to this directory, and create the appropriate symbolic links:

# cp -rp /home/* /storage/home
# rm -rf /home /usr/home
# ln -s /storage/home /home
# ln -s /storage/home /usr/home

Users should now have their data stored on the freshly created /storage/home file system. Test by adding a new user and logging in as that user.
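
As a quick check (the user name here is purely illustrative), a test account could be created with pw(8) and its home directory location verified:

# pw useradd testuser -m
# ls -ld /home/testuser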

Try creating a snapshot which may be rolled back later:

# zfs snapshot storage/home@08-30-08

Note that a snapshot can only be taken of a full file system, not of a single home directory or file. The @ character is the delimiter between the file system or volume name and the snapshot name. When a user's home directory gets trashed, restore it with:

# zfs rollback storage/home@08-30-08

To get a list of all available snapshots, run ls in the file system's .zfs/snapshot directory. For example, to see the previously taken snapshot, perform the following command:

# ls /storage/home/.zfs/snapshot

It is possible to write a script to perform monthly snapshots of user data (a minimal sketch is shown after the next command); however, over time, snapshots may consume a great deal of disk space. The previous snapshot may be removed using the following command:

# zfs destroy storage/home@08-30-08
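
As mentioned above, snapshot creation can be scripted. A minimal sketch of such a script might look like the following; the dataset name and date format are only illustrative:

#!/bin/sh
# Minimal sketch: snapshot the storage/home file system, naming the
# snapshot after the current date (same format as the examples above).
/sbin/zfs snapshot storage/home@`date +%m-%d-%y`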

After all of this testing, there is no reason to keep /storage/home in its present state. Make it the real /home file system:

# zfs set mountpoint=/home storage/home

Issuing the df and mount commands will show that the system now treats our file system as the real /home:

# mount
/dev/ad0s1a on / (ufs, local)
devfs on /dev (devfs, local)
/dev/ad0s1d on /usr (ufs, local, soft-updates)
storage on /storage (zfs, local)
storage/home on /home (zfs, local)
# df
Filesystem   1K-blocks    Used    Avail Capacity  Mounted on
/dev/ad0s1a    2026030  235240  1628708    13%    /
devfs                1       1        0   100%    /dev
/dev/ad0s1d   54098308 1032826 48737618     2%    /usr
storage       26320512       0 26320512     0%    /storage
storage/home  26320512       0 26320512     0%    /home

This completes the RAID-Z configuration. To get status updates about the file systems created during the nightly periodic(8) runs, issue the following command:

# echo 'daily_status_zfs_enable="YES"' >> /etc/periodic.conf

21.2.2.3 Recovering RAID-Z

Every software RAID has a method of monitoring its state, and ZFS is no exception. The status of RAID-Z devices may be viewed with the following command:

# zpool status -x

If all pools are healthy and everything is normal, the following message will be returned:

all pools are healthy

If there is an issue, such as a disk that has gone offline, the pool status will be returned and look similar to:

  pool: storage
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
	Sufficient replicas exist for the pool to continue functioning in a
	degraded state.
action: Online the device using 'zpool online' or replace the device with
	'zpool replace'.
 scrub: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	storage     DEGRADED     0     0     0
	  raidz1    DEGRADED     0     0     0
	    da0     ONLINE       0     0     0
	    da1     OFFLINE      0     0     0
	    da2     ONLINE       0     0     0

errors: No known data errors

This indicates that the device was taken offline by the administrator, which is true for this particular example. To take the disk offline, the following command was used:

# zpool offline storage da1

It is now possible to replace da1 after the system has been powered down. When the system is back online, the following command may be issued to replace the disk:

# zpool replace storage da1

From here, the status may be checked again, this time without the -x flag to get state information:

# zpool status storage
 pool: storage
 state: ONLINE
 scrub: resilver completed with 0 errors on Sat Aug 30 19:44:11 2008
config:

	NAME        STATE     READ WRITE CKSUM
	storage     ONLINE       0     0     0
	  raidz1    ONLINE       0     0     0
	    da0     ONLINE       0     0     0
	    da1     ONLINE       0     0     0
	    da2     ONLINE       0     0     0

errors: No known data errors

As shown from this example, everything appears to be normal.

21.2.2.4 Data Verification

As previously mentioned, ZFS uses checksums to verify the integrity of stored data. They are enabled automatically upon creation of file systems and may be disabled using the following command:

# zfs set checksum=off storage/home

Disabling checksums is not a wise idea, however, as they take very little storage space and are far more useful when enabled. There also appears to be no noticeable cost to having them enabled. While enabled, it is possible to have ZFS check data integrity using checksum verification. This process is known as “scrubbing.” To verify the data integrity of the storage pool, issue the following command:

# zpool scrub storage

This process may take considerable time depending on the amount of data stored. It is also very I/O intensive, so much so that only one of these operations may be run at any given time. After the scrub has completed, the status is updated and may be viewed by issuing a status request:

# zpool status storage
 pool: storage
 state: ONLINE
 scrub: scrub completed with 0 errors on Sat Aug 30 19:57:37 2008
config:

	NAME        STATE     READ WRITE CKSUM
	storage     ONLINE       0     0     0
	  raidz1    ONLINE       0     0     0
	    da0     ONLINE       0     0     0
	    da1     ONLINE       0     0     0
	    da2     ONLINE       0     0     0

errors: No known data errors

The completion time of the scrub is displayed in this example. This feature helps to ensure data integrity over a long period of time.

There are many more options for the Z file system; see the zfs(8) and zpool(8) manual pages.

21.2.2.5 ZFS Quotas

ZFS supports different types of quotas: the refquota, the general quota, the user quota, and the group quota. This section explains the basics of each one and includes some usage instructions.

Quotas limit the amount of space that a dataset and its descendants can consume, enforcing a limit on the space used by descendant file systems and snapshots as well. In terms of users, quotas are useful for limiting the amount of space a particular user can use.

Note: Quotas cannot be set on volumes, as the volsize property acts as an implicit quota.

The refquota, refquota=size, limits the amount of space a dataset can consume by enforcing a hard limit on the space used. However, this hard limit does not include space used by descendants, such as file systems or snapshots.
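
For example, to set a refquota of 10 GB on storage/home/bob (the dataset used elsewhere in this section), a command such as the following could be used:

# zfs set refquota=10G storage/home/bob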

To enforce a general quota of 10 GB for storage/home/bob, use the following:

# zfs set quota=10G storage/home/bob

User quotas limit the amount of space that can be used by the specified user. The general format is userquota@user=size, and the user's name must be in one of the following formats:

  • POSIX compatible name (e.g., joe).

  • POSIX numeric ID (e.g., 789).

  • SID name (e.g., joe.bloggs@example.com).

  • SID numeric ID (e.g., S-1-123-456-789).

For example, to enforce a quota of 50 GB for a user named joe on the storage/home file system, use the following:

# zfs set userquota@joe=50G storage/home

To remove the quota or make sure that one is not set, instead use:

# zfs set userquota@joe=none storage/home

User quota properties are not displayed by zfs get all. Non-root users can only see their own quotas unless they have been granted the userquota privilege. Users with this privilege are able to view and set everyone's quota.
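
This privilege can be delegated with zfs allow. For example, a command similar to the following (the user and file system names are illustrative) would grant joe the ability to view and set user quota properties on storage/home:

# zfs allow joe userquota storage/home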

The group quota limits the amount of space that a specified user group can consume. The general format is groupquota@group=size.

To set the quota for the group firstgroup to 50 GB on storage/home, use:

# zfs set groupquota@firstgroup=50G storage/home

To remove the quota for the group firstgroup, or make sure that one is not set, instead use:

# zfs set groupquota@firstgroup=none storage/home

As with the user quota property, non-root users can only see the quotas associated with the user groups that they belong to; however, the root user or a user with the groupquota privilege can view and set all quotas for all groups.

The zfs userspace subcommand displays the amount of space consumed by each user on the specified file system or snapshot, along with any quotas that have been set. The zfs groupspace subcommand does the same for groups. For more information about supported options, or how to display only specific information, see zfs(8).
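
For example, to display space usage and quotas for every user on storage/home (an illustrative invocation):

# zfs userspace storage/home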

To list the quota for storage/home/bob, if you have the correct privileges or are root, use the following:

# zfs get quota storage/home/bob

21.2.2.6 ZFS Reservations

ZFS supports two types of space reservations. This section will explain the basics of each one, and include some usage instructions.

The reservation property makes it possible to guarantee a minimum amount of space for a dataset and its descendants. This means that if a 10 GB reservation is set on storage/home/bob, at least 10 GB of space remains reserved for this dataset even when disk space runs low. The refreservation property sets or indicates the minimum amount of space guaranteed to a dataset excluding its descendants, such as snapshots. As an example, if a snapshot were taken of storage/home/bob, enough disk space would have to exist outside of the refreservation amount for the operation to succeed, because descendants of the main dataset are not counted against the refreservation amount and so do not encroach on the space set aside.

Reservations of any sort are useful in many situations, for example planning and testing the suitability of disk space allocation in a new system, or ensuring that enough space is available on file systems for system recovery procedures and files.

The general format of the reservation property is reservation=size, so to set a reservation of 10 GB on storage/home/bob, the following command is used:

# zfs set reservation=10G storage/home/bob

To make sure that no reservation is set, or to remove a reservation, instead use:

# zfs set reservation=none storage/home/bob

The same principle can be applied to the refreservation property for setting a refreservation, with the general format refreservation=size.
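
For example, to set a refreservation of 10 GB on storage/home/bob, a command such as the following could be used:

# zfs set refreservation=10G storage/home/bob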

To check if any reservations or refreservations exist on storage/home/bob, execute one of the following commands:

# zfs get reservation storage/home/bob
# zfs get refreservation storage/home/bob