Combustible Systems

A hobby sysadmin blog

Linux Mint ZFS root, full disk encryption, hibernation, encrypted swap

This setup tested with Linux Mint 17. No debootstrap required!

Goals

  • Fully encrypted system with three partitions: root, home, swap
  • Root and home partitions ZFS mirrors
  • Swap encrypted and mirrored (RAID 1) for high availability
  • Working Hibernation
  • Only one password to enter at boot time

Resources

Installation Media Preparation

The first step is to set up a bootable USB stick or live CD. I suggest the USB stick, since Linux Mint / Ubuntu have lately added the ability to put persistent data on the image. This comes in handy when you have to boot back into the USB stick for troubleshooting if anything goes wrong.

I created a Linux Mint 17 bootable USB stick, with several GB of persistence by downloading the image from Linux Mint, and using unetbootin to flash it to a drive.

Something to note is that I will be doing a UEFI-based installation (as compared to MBR). This is handy if you're also going to dual-boot a recent Windows installation which is already UEFI.

Now, boot the USB live media. If you're intending on a UEFI install, make sure you boot it in UEFI mode.

All commands in this guide probably need to be run as root. It's annoying for copy-paste, but I've prefixed all commands with # to make clear the distinction between commands and responses.

First, install some packages on the installation media itself.

# apt-add-repository --yes ppa:zfs-native/stable
# apt-get update
# apt-get install build-essential
# apt-get install spl-dkms zfs-dkms ubuntu-zfs

Note ZFS's suggested packages, which might be interesting later.

Suggested packages:
 zfs-auto-snapshot nfs-kernel-server zfs-initramfs

Check that the kernel module got loaded successfully.

# modprobe zfs
# dmesg | grep ZFS:
[ 2092.599329] ZFS: Loaded module v0.6.3-2~trusty, ZFS pool version 5000, ZFS filesystem version 5

Partitioning

Partition each disk (sdx becomes sda or sdb, respectively). I used two identical hard drives for this build, to make life easy. To do the partitioning, I used gparted. Whatever works for you is fine.

Note that because I'm using EFI, I needed BOTH an EFI partition and a BOOT partition. If you're doing MBR, you should just need a BOOT partition.

You'll also note that for some reason these instructions skip sdx5. I had an existing partition here that I specifically didn't want to modify. Change your setup accordingly.

Here's how I set up my drives:

  • Create GPT partition table
  • sdx1 - 512 MiB fat32 (hddX-boot-efi) - Set flag to 'boot', set label to 'hddXbootefi'
  • sdx2 - 512 MiB ext4 (hddX-boot) - Set label to 'hddXboot'
  • sdx3 - 20000 MiB unformatted (hddX-swap) - Set flag to 'raid'
  • sdx4 - 100000 MiB unformatted (hddX-root)
  • sdx6 - 414488 MiB unformatted (hddX-data)

Now that we have the basic partitions, set up mdadm to raid the swap. Swap gets mirrored first with mdadm, so anything saved to swap gets encrypted once and then written to both disks for high availability.

# apt-get install mdadm
# mdadm --create /dev/md/md0-swap --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3

Have a look in mdstat to see that the raid array is running.

# cat /proc/mdstat
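On a healthy mirror you should see something like this (illustrative, not my exact output; a resync may still be in progress right after creation):

md127 : active raid1 sdb3[1] sda3[0]
      20461440 blocks super 1.2 [2/2] [UU]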

Setting up encrypted containers

Now we need to create the encrypted containers. The idea is that the underlying data is encrypted on the drive, and ZFS will be layered on top of these encrypted containers.

I used the same passphrase for each volume, to make recovery easier later. I'm not going to go into security here, if you aren't really familiar with disk encryption on Linux I'd suggest reading about it. Use a reasonable password, etc.
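If you'd rather pin the cipher parameters explicitly instead of relying on the defaults, cryptsetup accepts them at format time. For example (aes-xts-plain64 with a 512-bit key is a common choice, not a requirement):

# cryptsetup -y -v --cipher aes-xts-plain64 --key-size 512 luksFormat /dev/sda4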

# cryptsetup -y -v luksFormat /dev/sda4
# cryptsetup -y -v luksFormat /dev/sda6
# cryptsetup -y -v luksFormat /dev/sdb4
# cryptsetup -y -v luksFormat /dev/sdb6
# cryptsetup -y -v luksFormat /dev/md/md0-swap

Now that partitions are created, get the partition identifiers. I will use these as much as possible to address the disks. If you're following along, maybe make a copy of this page and find+replace the ones below with yours to make things easier.

# blkid
/dev/sda1: UUID="F3B3-2E0F" TYPE="vfat" LABEL="HDD1BOOTEFI"
/dev/sda2: UUID="999bee5f-2f0a-4429-bbb8-85b6956bbcc3" TYPE="ext4" LABEL="hdd1boot"
/dev/sda3: UUID="03a41c88-41a9-c7ed-6e3d-34e02470ef87" UUID_SUB="ce7f7c53-d6c5-8e8e-52fb-11c4379084df" LABEL="mint:md0-swap" TYPE="linux_raid_member"
/dev/sda4: UUID="a4899072-54f8-45b4-ae89-ad870a12ff50" TYPE="crypto_LUKS"
/dev/sda6: UUID="c79e2f23-90a8-4935-a485-930891d80e5e" TYPE="crypto_LUKS"
/dev/sdb1: UUID="7CC3-B53C" TYPE="vfat" LABEL="HDD2BOOTEFI"
/dev/sdb2: LABEL="hdd2boot" UUID="0f669718-f0e0-4a21-819c-9ced33c30c59" TYPE="ext4"
/dev/sdb3: UUID="03a41c88-41a9-c7ed-6e3d-34e02470ef87" UUID_SUB="162253f4-d465-057b-1a82-158501c181c4" LABEL="mint:md0-swap" TYPE="linux_raid_member"
/dev/sdb4: UUID="1dee6dae-8684-46d6-a932-eaba5e32f6ac" TYPE="crypto_LUKS"
/dev/sdb6: UUID="fab2b777-cd44-4358-8606-55a6b8af13ba" TYPE="crypto_LUKS"
/dev/md127: UUID="39124c86-b25b-4de8-8072-6ba107f19b7e" TYPE="crypto_LUKS"

Open encrypted volumes

# cryptsetup luksOpen /dev/disk/by-uuid/a4899072-54f8-45b4-ae89-ad870a12ff50 crypt-hdd1-root
# cryptsetup luksOpen /dev/disk/by-uuid/c79e2f23-90a8-4935-a485-930891d80e5e crypt-hdd1-data
# cryptsetup luksOpen /dev/disk/by-uuid/1dee6dae-8684-46d6-a932-eaba5e32f6ac crypt-hdd2-root
# cryptsetup luksOpen /dev/disk/by-uuid/fab2b777-cd44-4358-8606-55a6b8af13ba crypt-hdd2-data
# cryptsetup luksOpen /dev/disk/by-uuid/39124c86-b25b-4de8-8072-6ba107f19b7e crypt-swap

Take a peek at /dev/mapper

# ls -l /dev/mapper
total 0
crw------- 1 root root 10, 236 Aug 17 22:47 control
lrwxrwxrwx 1 root root 7 Aug 18 02:08 crypt-hdd1-data -> ../dm-2
lrwxrwxrwx 1 root root 7 Aug 18 02:08 crypt-hdd1-root -> ../dm-1
lrwxrwxrwx 1 root root 7 Aug 18 02:08 crypt-hdd2-data -> ../dm-5
lrwxrwxrwx 1 root root 7 Aug 18 02:08 crypt-hdd2-root -> ../dm-4
lrwxrwxrwx 1 root root 7 Aug 18 02:35 crypt-swap -> ../dm-0

Install pv, a useful tool for monitoring the volume of piped data.

# apt-get install pv

Zero out the encrypted volumes. This effectively writes random data to the entire hard drive through the encryption mechanism. This makes it really hard to tell from a static image how full the drive is with actual data, since it will appear to be 100% full of random bits. I really don't suggest skipping this step, because it'll be impossible to do this once you have data on the device.

The warning should be obvious: THIS WILL OVERWRITE EVERYTHING ON THE TARGET DEVICES. FOREVER. Don't screw it up if you have other partitions that you care about.

I've chained these commands so it's easy to walk away and still get them all run. This will take a while. For best results, run the commands for each hard disk in a separate window simultaneously.

# dd if=/dev/zero bs=64K | pv | dd of=/dev/mapper/crypt-swap bs=64K

# dd if=/dev/zero bs=64K | pv | dd of=/dev/mapper/crypt-hdd1-root bs=64K ; \
dd if=/dev/zero bs=64K | pv | dd of=/dev/mapper/crypt-hdd1-data bs=64K

# dd if=/dev/zero bs=64K | pv | dd of=/dev/mapper/crypt-hdd2-root bs=64K ; \
dd if=/dev/zero bs=64K | pv | dd of=/dev/mapper/crypt-hdd2-data bs=64K

Setting up partitions (swap + ZFS)

Set up swap

# mkswap -f /dev/mapper/crypt-swap
Setting up swapspace version 1, size = 20461436 KiB
no label, UUID=3ccce1a1-b612-47d3-9bd7-9d6bd4b3e537
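
If you want to sanity-check the new swap device right away (optional; turn it back off before continuing):

# swapon /dev/mapper/crypt-swap
# swapon -s
# swapoff /dev/mapper/crypt-swap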

Figure out system block size.

# lsblk -o NAME,PHY-SEC
NAME                         PHY-SEC
sda                              512
├─sda1                           512
├─sda2                           512
├─sda3                           512
│ └─md127                        512
│   └─crypt-swap (dm-0)          512
├─sda4                           512
│ └─crypt-hdd1-root (dm-1)       512
├─sda5                           512
└─sda6                           512
  └─crypt-hdd1-data (dm-2)       512
sdb                              512
├─sdb1                           512
├─sdb2                           512
├─sdb3                           512
│ └─md127                        512
│   └─crypt-swap (dm-0)          512
├─sdb4                           512
│ └─crypt-hdd2-root (dm-4)       512
├─sdb5                           512
└─sdb6                           512
  └─crypt-hdd2-data (dm-5)       512

Looks like my system uses 512-byte blocks, which corresponds to ZFS ashift=9. If you had 4096-byte (4K) blocks, you would use ashift=12. From what I can tell, the warning about ashift=12 in the ZFS-on-Linux tutorial is no longer relevant; I've built this on two systems, one using ashift=12 and one using ashift=9, and both work fine.

Create ZFS pools - pc-rpool and pc-dpool. Note that you can choose whatever names you want. The ZFS tutorial suggests choosing something unique to make recovery easier if it comes to that (contrary to several guides online, don't use 'rpool'). I promise, it'll work fine.

# zpool create -o ashift=9 pc-rpool mirror /dev/mapper/crypt-hdd1-root /dev/mapper/crypt-hdd2-root
# zpool create -o ashift=9 pc-dpool mirror /dev/mapper/crypt-hdd1-data /dev/mapper/crypt-hdd2-data
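
To confirm what ashift the pools actually got, zdb can read it back out of the pool configuration (pool names from above; output trimmed):

# zdb -C pc-rpool | grep ashift
                ashift: 9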

Create some ZFS filesystems.

# zfs create pc-rpool/ROOT
# zfs create pc-dpool/HOME

Take a look

# zfs list
NAME            USED  AVAIL  REFER  MOUNTPOINT
pc-dpool        147K   398G    30K  /pc-dpool
pc-dpool/HOME    30K   398G    30K  /pc-dpool/HOME
pc-rpool        150K  96.0G    31K  /pc-rpool
pc-rpool/ROOT    30K  96.0G    30K  /pc-rpool/ROOT

Set some useful properties. Specifically, turn access times off and change the mountpoints. Consider also enabling compression or NFS sharing; the zfs(8) man page documents the full set of properties.

# zfs set atime=off pc-dpool/HOME
# zfs set atime=off pc-rpool/ROOT
# zfs set mountpoint=none pc-rpool
# zfs set mountpoint=none pc-dpool
# zfs set mountpoint=/ pc-rpool/ROOT
cannot mount '/': directory is not empty
property may be set but unable to remount filesystem
# zfs set mountpoint=/home pc-dpool/HOME
cannot mount '/home': directory is not empty
property may be set but unable to remount filesystem

These "directory is not empty" errors are expected: the live environment's / and /home are busy, so ZFS can't remount over them. The properties are set anyway and will take effect once the pools are imported on the installed system.
# zpool set bootfs=pc-rpool/ROOT pc-rpool
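
If you do want compression, as mentioned above, lz4 is the usual recommendation and can be enabled per dataset (shown here on the home dataset; apply wherever you see fit):

# zfs set compression=lz4 pc-dpool/HOME
# zfs get compression pc-dpool/HOME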

Now export the pools so they can be imported elsewhere.

# zpool export pc-rpool
# zpool export pc-dpool

System Installation

Now for the ugly part. (I'd love a better way to do this, but it does work.) We'll use the Linux Mint installer to install to a temporary location, then move the system onto the ZFS pool. A flash drive or a spare partition both work. If you're really pressed for space, you could offline one of the ZFS or swap partitions, install to that, and rebuild the array later.

In my case, I used /dev/sdb5. During installation, you definitely want to use the crypt-swap partition you already created - this eliminates having to do swap configuration to get hibernation working later. I also used /dev/sda1 as the EFI boot partition and /dev/sda2 as the boot partition. Explicitly do not use sdb1 and sdb2.

Once the install is done, we have to copy the files to the real root partition. First, mount the two devices.

# mkdir /mnt/temproot
# mkdir /mnt/root
# mount /dev/sdb5 /mnt/temproot
# zpool import -R /mnt/root pc-rpool

Check that it got mounted correctly (specifically, that pc-rpool/ROOT got mounted on /mnt/root)

# mount
(several lines)
/dev/sdb5 on /mnt/temproot type ext4 (rw)
pc-rpool/ROOT on /mnt/root type zfs (rw,noatime,xattr)

Copy the system over to the ZFS root partition. I looked around a bit for the best rsync command to use, and settled on this one from archlinux's guide to transferring your system to raid. Your mileage may vary.

# rsync -avxHAXS --delete /mnt/temproot/ /mnt/root

This creates some stuff in /mnt/root/home, which doesn't belong on this partition. Now we need to transfer it correctly to the ZFS home directory.

# rm -rf /mnt/root/home/*
# zpool import -R /mnt/root pc-dpool

Make sure things are mounted sensibly

# mount
(several lines)
/dev/sdb5 on /mnt/temproot type ext4 (rw)
pc-rpool/ROOT on /mnt/root type zfs (rw,noatime,xattr)
pc-dpool/HOME on /mnt/root/home type zfs (rw,noatime,xattr)

Do the transfer of home data to the correct partition

# rsync -avxHAXS --delete /mnt/temproot/home/ /mnt/root/home

Modifying the new installation

Set up a chroot

Now we need to configure the system. Set up mounts for a chroot:

# umount /mnt/temproot
# mount /dev/disk/by-uuid/0ed02d74-856d-4302-a25b-9039f8c03d10 /mnt/root/boot
# mount /dev/disk/by-uuid/F3B3-2E0F /mnt/root/boot/efi
# mount --bind /dev /mnt/root/dev
# mount --bind /proc /mnt/root/proc
# mount --bind /sys /mnt/root/sys

Configure resolv.conf so networking will work:

# cp /etc/resolv.conf /mnt/root/etc/resolv.conf

Chroot into the new system:

# chroot /mnt/root /bin/bash --login

Install packages

Upgrade and install packages in the new environment:

# locale-gen en_US.UTF-8
# apt-add-repository --yes ppa:zfs-native/stable
# apt-get update
# apt-get install build-essential
# apt-get install spl-dkms zfs-dkms ubuntu-zfs
# apt-get install zfs-initramfs
# apt-get install mdadm
# apt-get dist-upgrade

Add encryption support to the initial ramdisk

This is one of the ugly steps I'm not completely happy with: adding the ZFS root to your /etc/fstab. This isn't because the entry is needed for mounting, but because the initramfs-tools script cryptroot checks /etc/fstab to see whether the root device is a crypto device, and will only install the right modules into the initial ramdisk if it thinks you have an encrypted root partition. This is annoying, but since ZFS ignores entries in /etc/fstab, it shouldn't be a problem. My /etc/fstab looks like this:

UUID=0ed02d74-856d-4302-a25b-9039f8c03d10 /boot ext4 defaults 0 2
UUID=F3B3-2E0F /boot/efi vfat defaults 0 1
/dev/mapper/crypt-swap none swap sw 0 0
/dev/mapper/crypt-hdd1-root / zfs defaults 0 0

Now the cryptroot script should be able to detect that we have an encrypted partition for the root device. Unfortunately, it still won't be able to determine which devices need to be unlocked to boot, so we need to create a file with this information. For now, it'll just ask for a password to unlock each device; later in this article there's information on setting it up to require only one password. Create /etc/initramfs-tools/conf.d/cryptroot with the following content:

target=crypt-swap,source=UUID=39124c86-b25b-4de8-8072-6ba107f19b7e,key=none
target=crypt-hdd1-root,source=UUID=a4899072-54f8-45b4-ae89-ad870a12ff50,key=none,rootdev
target=crypt-hdd1-data,source=UUID=c79e2f23-90a8-4935-a485-930891d80e5e,key=none
target=crypt-hdd2-root,source=UUID=1dee6dae-8684-46d6-a932-eaba5e32f6ac,key=none,rootdev
target=crypt-hdd2-data,source=UUID=fab2b777-cd44-4358-8606-55a6b8af13ba,key=none

GRUB ZFS entry

Now we have to convince grub to generate a reasonable boot entry. I originally did this by creating a /etc/grub.d/40_custom file with a booting configuration. I've left that below, for reference. However, for a better solution, see the next paragraph and skip the following chunk.

menuentry 'Linux Mint 17 Qiana' --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-/dev/mapper/crypt_hddX_root' {
 load_video
 gfxmode $linux_gfx_mode
 insmod part_gpt
 insmod ext2
 set root='hd0,gpt2'
 if [ x$feature_platform_search_hint = xy ]; then
 search --no-floppy --fs-uuid --set=root --hint-bios=hd0,gpt2 --hint-efi=hd0,gpt2 --hint-baremetal=ahci0,gpt2 0ed02d74-856d-4302-a25b-9039f8c03d10
 else
 search --no-floppy --fs-uuid --set=root 0ed02d74-856d-4302-a25b-9039f8c03d10
 fi
 linux /vmlinuz-3.13.0-24-generic rpool=pc-rpool boot=zfs ro quiet splash $vt_handoff
 initrd /initrd.img-3.13.0-24-generic
}

Rather than create a custom entry that you'll have to tinker with every time you update your kernel, I modified the grub script grub.d/10_linux with some rudimentary ZFS support. If you're doing this on Ubuntu, edit the file in /etc/grub.d/10_linux. Otherwise, if you're on Linux Mint, apparently the file gets overwritten every time by the Mint customizer. In that case, you need to edit the file in /usr/share/ubuntu-system-adjustments/grub/10_linux.

First, back up the file.

# cp /usr/share/ubuntu-system-adjustments/grub/10_linux /usr/share/ubuntu-system-adjustments/grub/10_linux.orig

Here are the changes I made to 10_linux. If you're not familiar with unified diffs, only the lines with +'s are added, the rest are for context. Don't add the plus signs...

This works by detecting which ZFS partition has a mountpoint of '/', getting its pool name, and adding it to the grub command line as rpool=${rootpool}.

diff -u /usr/share/ubuntu-system-adjustments/grub/10_linux.orig /usr/share/ubuntu-system-adjustments/grub/10_linux
--- /usr/share/ubuntu-system-adjustments/grub/10_linux.orig     2014-09-01 19:16:26.941703165 -0700
+++ /usr/share/ubuntu-system-adjustments/grub/10_linux  2014-09-01 19:17:59.475399076 -0700
@@ -65,6 +65,15 @@
   fi
 fi

+if [ "x`${grub_probe} --device ${GRUB_DEVICE} --target=fs 2>/dev/null || true`" = xzfs ]; then
+  rootpool="`zfs list | grep '[[:space:]]/$' | sed -e 's|\([^/\t ]*\).*|\1|g'`"
+  if [ "x${rootpool}" != x ]; then
+    GRUB_CMDLINE_LINUX="boot=zfs rpool=${rootpool} ${GRUB_CMDLINE_LINUX}"
+    # Unset the root device variable because zfs doesn't use it
+    LINUX_ROOT_DEVICE=""
+  fi
+fi
+
 for word in $GRUB_CMDLINE_LINUX_DEFAULT; do
   if [ "$word" = splash ]; then
     GRUB_CMDLINE_LINUX_DEFAULT="$GRUB_CMDLINE_LINUX_DEFAULT \$vt_handoff"

If you're on Linux Mint, you have to manually copy this file to where it normally lives in /etc/grub.d

# cp /usr/share/ubuntu-system-adjustments/grub/10_linux /etc/grub.d/10_linux

One more thing I had to change was to comment out the single line in /etc/default/grub.d/dmraid2mdadm.cfg. For some reason, this file adds nomdmonddf nomdmonisw to the kernel boot line, which seems to disable mdadm. We definitely don't want that, so we turn it off.
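
For reference, the line in question (quoting from memory; double-check your own file) ends up commented out like so:

#GRUB_CMDLINE_LINUX_DEFAULT="$GRUB_CMDLINE_LINUX_DEFAULT nomdmonddf nomdmonisw"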

If you try to update-grub at this point, you'll get an annoying error about missing /dev/crypt-* files; for some reason, grub isn't looking in /dev/mapper. You can fix this with the following symbolic links. Later in this article there's information on fixing this permanently by creating the links automatically.

# ln -s /dev/mapper/crypt-hdd1-root /dev/crypt-hdd1-root
# ln -s /dev/mapper/crypt-hdd2-root /dev/crypt-hdd2-root
# ln -s /dev/mapper/crypt-hdd1-data /dev/crypt-hdd1-data
# ln -s /dev/mapper/crypt-hdd2-data /dev/crypt-hdd2-data
# ln -s /dev/mapper/crypt-swap /dev/crypt-swap

Make sure grub recognizes the root file system:

# grub-probe /
zfs

Update grub. This regenerates /boot/grub/grub.cfg. You might want to look in there and check that things look OK (especially that the kernel image line includes boot=zfs and rpool=pc-rpool).

# update-grub

Finally, rebuild the initramfs images.

# update-initramfs -c -k all
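
To double-check that the cryptsetup pieces actually landed in the new image, lsinitramfs (part of initramfs-tools) can list its contents. The kernel version below is the one from the grub entry earlier in this guide; substitute whatever you just built:

# lsinitramfs /boot/initrd.img-3.13.0-24-generic | grep -i crypt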

Cleanup

If everything goes OK, your new system should be in a bootable state. Note that it took me several tries, multiple days, and reading through dozens of online articles to develop these steps to the point where my system would boot. If something goes wrong, just boot back into your live USB/DVD, re-install zfs modules, re-mount the crypto disks, import the zfs volumes to /mnt/root, make the bind mounts to proc,sys,dev as before, chroot in, and try again.
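
Condensed, that recovery sequence looks roughly like this (UUIDs and pool names from earlier in this guide; repeat the luksOpen for each volume):

# apt-add-repository --yes ppa:zfs-native/stable && apt-get update
# apt-get install build-essential spl-dkms zfs-dkms ubuntu-zfs
# cryptsetup luksOpen /dev/disk/by-uuid/a4899072-54f8-45b4-ae89-ad870a12ff50 crypt-hdd1-root
# mkdir -p /mnt/root
# zpool import -f -R /mnt/root pc-rpool
# zpool import -f -R /mnt/root pc-dpool
# mount /dev/disk/by-uuid/0ed02d74-856d-4302-a25b-9039f8c03d10 /mnt/root/boot
# mount /dev/disk/by-uuid/F3B3-2E0F /mnt/root/boot/efi
# mount --bind /dev /mnt/root/dev
# mount --bind /proc /mnt/root/proc
# mount --bind /sys /mnt/root/sys
# chroot /mnt/root /bin/bash --login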

Once you're done in the chroot, exit out of it and try the following before rebooting. Odds are the dist-upgrade started a bunch of annoying processes, so you won't be able to unmount everything. It's worth a shot anyway; it turns out successful unmounting isn't critical.

# umount /mnt/root/proc /mnt/root/dev /mnt/root/sys /mnt/root/boot/efi /mnt/root/boot
# zpool export pc-dpool
# zpool export pc-rpool

First boot

Now you can reboot. At grub, make sure you choose your custom config entry. After you type your password in several times, your system should come up. The one thing that won't come up properly is any non-root ZFS pool; in my case, that means /home. It's pretty hard to log into the UI with no home directory. This happens because ZFS thinks the pools are still in use by a different system (the live environment); the root pool is forcibly imported, so that one comes up fine.

To fix it, use Ctrl+Alt+F1 to get to a console login prompt, log in as your user, then sudo -i to drop to root. Forcibly import your pools, regenerate your initramfs (which includes a ZFS cache file with pool ownership info), and reboot. After the second boot, all your pools should mount successfully.

# zpool import -f pc-dpool
# update-initramfs -c -k all
# reboot

Finishing touches

Hopefully at this point, you're booted into your fully-functional system. At this point, hibernation should work fine with the encrypted swap. That said, there are still a couple of missing features that I promised you at the beginning. These are:

  1. update-grub fails without help, because the symlinks we created won't survive a reboot
  2. mdadm.conf contains no arrays, which causes a warning when updating the initramfs
  3. We only want one password at boot time; currently the same password must be entered five times
  4. If disk #1 dies, grub dies with it, and the system will need a live USB to recover

1 - update-grub fix

The most important thing is to fix update-grub, because otherwise system updates might break the boot. I looked around for the best way to do this and came across two main options: create an init script to make the symbolic links, or create/modify a udev rule. I chose to modify the udev rule that creates the /dev/mapper/crypt-* links in the first place. To do this, edit /lib/udev/rules.d/55-dm.rules and replace the following line:

ENV{DM_UDEV_DISABLE_DM_RULES_FLAG}!="1", ENV{DM_NAME}=="?*", SYMLINK+="mapper/$env{DM_NAME}"

With this line:

ENV{DM_UDEV_DISABLE_DM_RULES_FLAG}!="1", ENV{DM_NAME}=="?*", SYMLINK+="mapper/$env{DM_NAME} $env{DM_NAME}"

Re-process the udev rules and update grub as follows:

# udevadm control --reload-rules
# udevadm trigger
# update-grub

2 - mdadm no arrays message

When you updated your initramfs, you probably noticed a warning like the following:

# update-initramfs -c -k all
W: mdadm: /etc/mdadm/mdadm.conf defines no arrays

Easily fixed by appending the scan output to mdadm.conf:

# mdadm --detail --scan >> /etc/mdadm/mdadm.conf
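
The appended entry should look something like this (UUID reformatted from the blkid output earlier; yours will differ):

ARRAY /dev/md/md0-swap metadata=1.2 name=mint:md0-swap UUID=03a41c88:41a9c7ed:6e3d34e0:2470ef87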

3 - Single password

It's definitely annoying to have to type the password five times at boot. To address this, I adapted the instructions in the encryptedZfs article linked at the beginning of this page. Basically, the user unlocks the crypt-swap partition (which is RAID 1 across both hard drives!). The system then generates a derived key from the crypt-swap master key, which it uses to unlock the remaining devices. It's worth noting that this is, strictly speaking, less secure than just a password, because EITHER the password OR the derived key can unlock the crypto partitions. This doesn't bother me much, though: anyone with enough access to extract the derived key can read all the data on the system anyway.

# /lib/cryptsetup/scripts/decrypt_derived crypt-swap > /root/derived.key
# cryptsetup luksAddKey /dev/disk/by-uuid/a4899072-54f8-45b4-ae89-ad870a12ff50 /root/derived.key
# cryptsetup luksAddKey /dev/disk/by-uuid/c79e2f23-90a8-4935-a485-930891d80e5e /root/derived.key
# cryptsetup luksAddKey /dev/disk/by-uuid/1dee6dae-8684-46d6-a932-eaba5e32f6ac /root/derived.key
# cryptsetup luksAddKey /dev/disk/by-uuid/fab2b777-cd44-4358-8606-55a6b8af13ba /root/derived.key

Copy the included decrypt_derived script to the initramfs-tools folder:

# cp /lib/cryptsetup/scripts/decrypt_derived /etc/initramfs-tools/scripts/luks/get.crypt-swap.decrypt_derived

Edit the decrypt_derived script:

# vi /etc/initramfs-tools/scripts/luks/get.crypt-swap.decrypt_derived

Now that the script is open in vi, we need to edit it so it doesn't take an argument. Add the following near the top of the file:

CRYPT_DEVICE=crypt-swap

Replace all instances of $1 with $CRYPT_DEVICE using vi's find-and-replace:

:1,$s/\$1/\$CRYPT_DEVICE/g

Finally, regenerate the boot images

# update-initramfs -c -k all
# update-grub

Now you should only need to enter the password for the crypt-swap device at boot time.

4 - Fault tolerance

I still need to do some more work here. I'm finding that mdadm throws a fit if either the primary or secondary drive is disconnected, and refuses to automatically assemble the md0-swap device in degraded mode. Without that, there's no crypt-swap, and no booting system.

What I think needs to happen:

  • Figure out how to make mdadm assemble the swap in degraded mode without input.
  • Duplicate boot and boot/efi on the second drive (rough sketch after this list).
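
For that second bullet, here's the rough, untested sketch I have in mind (device names from the partitioning above; the EFI details in particular need verification before you trust them):

# mkdir -p /mnt/boot2
# mount /dev/sdb2 /mnt/boot2
# rsync -avx --exclude efi /boot/ /mnt/boot2
# mkdir -p /mnt/boot2/efi
# mount /dev/sdb1 /mnt/boot2/efi
# grub-install --boot-directory=/mnt/boot2 --efi-directory=/mnt/boot2/efi /dev/sdb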

Closing thoughts

This system took forever to get going, but I really had a good time. I think if I had to do this again, I'd probably lump all of the zfs pools into just one pool, so that I didn't have to specifically allocate space for root and data. I'm still on the fence, though.

Do make it a point to make a backup copy of your LUKS headers, just in case.
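
cryptsetup has this built in. For example, one backup file per LUKS volume, stored somewhere off these disks:

# cryptsetup luksHeaderBackup /dev/sda4 --header-backup-file /safe/place/hdd1-root-luks-header.img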

10 thoughts on “Linux Mint ZFS root, full disk encryption, hibernation, encrypted swap”

  1. ben
    Phenomenal effort!
    This describes my ideal system, but I think I'll wait for things to be a bit less hands on to follow you. I used a keyfile to unlock a couple of luks ext4 external HDDs, but was disappointed that a live USB can get everything if the / is unencrypted. My more modest goal is mint, ext4 with everything but the boot locked, using efi. Anyway, good work and thanks for opening your findings to the community : )
    1. Combustible Post author
      Thanks for the comment! It's definitely a bit hands-on, but since I set things up several months ago I haven't had to do anything to maintain it. I always cross my fingers when I do a dist-upgrade, but so far so good.
  2. cfidicus
    I can't fully express my gratitude to you for this careful, step-wise explanation. I've been searching the web and studying for weeks on how to install ZoL on Linux Mint 17.1. It's been very confusing and frustrating. Yours is the most clear description that I've found. Thank you. In my case I have no need for encryption and will indeed lump all of the zfs pools, root and data that is, into just one pool, for the very reason you mention. Best regards.
    1. Combustible Post author
      Awesome! I'm glad you were able to get some use out of this. I too had a tough time getting going because most of the other articles I found required debootstrap.
  3. JMG
    I was very far into doing this with Gentoo when updating grub failed. Weeks since having given up, I found your solution via the symlinks and udev rule. It seems I'll be back at it for another try. Thanks.
    1. Combustible Post author
      Cool, glad I could help. Good luck!
  4. Enneamer
    May I ask whether you tried to set up swap on a ZVOL? While the ZoL GitHub has several issues related to this, most of them haven't had informative updates for years. I'm curious what the current state is.
    1. Combustible Post author
      I've never tried using ZFS for swap space, though it would probably be a good idea. Actually, I hadn't really thought about it too much, but in retrospect it seems like allowing memory (which might eventually be written to disk permanently) to swap to non-integrity protected media is probably a bad idea (I'm currently just using mdadm raid 1 for my swap partition). Maybe when the next Ubuntu or Linux Mint LTS comes out I'll try to get the swap to use ZFS too.
  5. JoKo
    Hello, Could you comment a bit more on hibernation? I would like to use ZFS for my laptop and hibernation is a must for me. However, issues like the one posted at https://github.com/zfsonlinux/zfs/issues/4701 (the date of the issue is May 26th, 2016), worry me. Are there still issues with ZFS and hibernation? Regards, JoKo
    1. Combustible Post author
      Hi JoKo, It's entirely possible that *new* issues with hibernation have arisen since I wrote this post almost exactly two years ago of which I am unaware. However, when I did write this post, hibernation was working fine when using a dedicated swap partition that was not a ZFS partition. In my case, I was using mdadm software raid 1 to get at least a bit of resilience. I still have not migrated my swap to a ZFS partition to test if this breaks things. Probably the only way to really answer your question is to try it out on a spare or virtual machine. Good luck! Byron
