Install ZFS on Ubuntu Server 19.04 and build RAID1

Background

It has been less than half a year since I turned a PC into a homebrew NAS. Because of a tight budget, my NAS has only a single HDD, but now I want to buy an additional HDD and set up mirroring to protect my data.

At first, I was going to build a RAID1 with ext4 using the motherboard's RAID function, but after some research I decided to use a file system called ZFS instead.

Configuration

Hardware

Part         Name
CPU          AMD Ryzen 5 2400G
Motherboard  ASUS ROG Strix B450-I
Memory       Kingston KVR24N17S8/8
Power        Corsair SF450 Platinum
HDD          Western Digital WD40EZRZ-RT2

Software

Ubuntu Server 19.04, running as an SMB (Samba) server

Prepare

Steps

Install ZFS on Linux

Ubuntu 19.04 seems to carry ZFS in its official repositories, so it can be installed normally with apt install.

$ sudo apt install --yes debootstrap gdisk zfs-initramfs
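
To make sure the kernel module and the userland tools are actually in place before going further, a quick check like this should do:

$ modinfo zfs | head -n 3
$ which zpool zfs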

Check the additional hard drives

$ sudo fdisk -l

(Abbreviated)

Disk /dev/sda: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: WDC WD40EZRZ-00G
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: CBCBA5BF-DD51-44FB-A592-E3579550A2C4

Device     Start        End     Sectors  Size Type
/dev/sda1   2048 6144002047  6144000000  2.9T Linux filesystem

Disk /dev/sdb: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: WDC WD40EZRZ-00G
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

I found that the existing HDD is /dev/sda and the newly added drive is /dev/sdb.

So first, write a GPT label to /dev/sdb.

$ sudo parted /dev/sdb
(parted) mklabel gpt
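
Incidentally, the same thing can be done non-interactively with parted's script mode (-s), which is handy if you ever script this step:

$ sudo parted -s /dev/sdb mklabel gpt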

Check the IDs of the hard drives. (Names such as /dev/sda are assigned at boot time and are not fixed.)

$ ls /dev/disk/by-id/

An HDD seems to have IDs in several formats, since this command's output shows multiple names for a single drive.
I decided to use the following two WWN names this time.

wwn-0x50014ee266279d3a -> ../../sda
wwn-0x50014ee210f658e7 -> ../../sdb
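
The by-id directory also contains ata-* and other aliases for the same drives, so filtering for the WWN entries makes the relevant lines easier to spot:

$ ls -l /dev/disk/by-id/ | grep wwn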

Create zpool (no RAID)

To check that the zpool command actually works, I print the list of pools.

$ sudo zpool list
no pools available

Of course, there are no pools yet.

For the time being, I try creating a pool with just the one new HDD.

$ sudo zpool create tank wwn-0x50014ee210f658e7

Then output the list again.

$ sudo zpool list
NAME   SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
tank  3.62T   432K  3.62T         -     0%     0%  1.00x  ONLINE  -

$ sudo zpool status
  pool: tank
 state: ONLINE
  scan: none requested
config:

    NAME                      STATE     READ WRITE CKSUM
    tank                      ONLINE       0     0     0
      wwn-0x50014ee210f658e7  ONLINE       0     0     0

I also checked what had happened to the partitions at this point.

Disk /dev/sdb: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: WDC WD40EZRZ-00G
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 923EA8D0-BFE5-BE44-BFE7-C0B74BD4819E

Device          Start        End    Sectors  Size Type
/dev/sdb1        2048 7814019071 7814017024  3.7T Solaris /usr & Apple ZFS
/dev/sdb9  7814019072 7814035455      16384    8M Solaris reserved 1

When you create a zpool on a whole HDD, two partitions seem to be created. I'm curious what the 8 MB partition is for, but this time I won't touch it and will move on to the next step.

Delete zpool

I had thought I would add a disk for mirroring later, but at this point I couldn't find a way to do it, so I decided to rebuild the pool with mirroring from the beginning.

$ sudo zpool destroy tank

(As it turns out, a mirror drive can be added afterward; see "Connecting and detaching devices in a storage pool".)
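
For the record, the way to turn a single-disk pool into a mirror afterward is zpool attach, which pairs a new device with an existing one. With this article's device names it would look like the following (I didn't take this route, so treat it as a sketch):

$ sudo zpool attach tank wwn-0x50014ee210f658e7 wwn-0x50014ee266279d3a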

Create zpool (RAID1)

$ sudo zpool create tank mirror wwn-0x50014ee266279d3a wwn-0x50014ee210f658e7
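
If creation succeeds, both drives should now appear under a mirror vdev in zpool status, roughly like this:

$ sudo zpool status
...
    NAME                        STATE     READ WRITE CKSUM
    tank                        ONLINE       0     0     0
      mirror-0                  ONLINE       0     0     0
        wwn-0x50014ee266279d3a  ONLINE       0     0     0
        wwn-0x50014ee210f658e7  ONLINE       0     0     0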

Create ZFS file system

$ sudo zfs create tank/home
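
By default this dataset is mounted at /tank/home. If a different location were needed, the mountpoint property could be changed; the path below is just an illustration:

$ sudo zfs set mountpoint=/export/home tank/home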

Check settings

Deduplication

First, make sure that the deduplication feature is disabled. It seems like an interesting feature, but my machine doesn't have the specs for it this time (ZFS deduplication is notoriously memory-hungry).

$ zfs get dedup
NAME       PROPERTY  VALUE  SOURCE
tank       dedup     off    default
tank/home  dedup     off    default
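
If it had been switched on at some point, it could be turned off per dataset in the usual way:

$ sudo zfs set dedup=off tank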

Transparent compression

I plan to use a separate SSD for applications that need speed, so for this pool I'll prioritize capacity and enable transparent compression.

$ sudo zfs set compression=on tank/home
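
One caveat: depending on the ZFS version, compression=on may map to the older lzjb algorithm rather than lz4, so many guides recommend naming the algorithm explicitly. Either way, the result can be verified afterward:

$ sudo zfs set compression=lz4 tank/home
$ zfs get compression tank/home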

Access time

As for access time, I don't really need it, but disabling it entirely reportedly causes problems with some software on rare occasions, so I adopted relatime as a compromise.

$ sudo zfs set atime=on tank
$ sudo zfs set relatime=on tank
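
The settings can be confirmed with zfs get, which accepts a comma-separated list of properties:

$ zfs get atime,relatime tank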

Enabling daemons

$ sudo systemctl enable zfs.target
$ sudo systemctl enable zfs-import-cache
$ sudo systemctl enable zfs-mount
$ sudo systemctl enable zfs-import.target

I rebooted to make sure that the ZFS file systems are mounted automatically at startup.
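
After the reboot, a quick sanity check is zfs mount, which with no arguments lists the currently mounted datasets; with this pool it should print something like:

$ zfs mount
tank                            /tank
tank/home                       /tank/home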

Copy files from a backup

Copy from the backup HDD to the ZFS dataset. (The backup HDD is mounted at /media/hdd_backup; I copy the home directory inside it.)

$ sudo rsync -avc /media/hdd_backup/home/ /tank/home/
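
Because -c makes rsync compare files by checksum, re-running the same command with -n (dry run) added is a cheap way to confirm nothing was missed; it should report no files to transfer:

$ sudo rsync -avcn /media/hdd_backup/home/ /tank/home/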

After copying, I checked with zfs list: 1.26 TB was copied and seems to be recognized properly.

$ zfs list
NAME       USED   AVAIL  REFER  MOUNTPOINT
tank       1.26T  2.25T   112K  /tank
tank/home  1.26T  2.25T  1.26T  /tank/home

Results

For now, it works properly.
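
Since compression is enabled, it may also be worth checking how much space it actually saves; the read-only compressratio property reports the achieved ratio:

$ zfs get compressratio tank/home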
