UEFI#
This guide can be used to install Void onto a single disk, with or without ZFS encryption.
It assumes the following:

- Your system uses UEFI to boot.
- Your system is x86_64.
- You will use glibc as your system libc.
- You're mildly comfortable with ZFS, EFI and discovering system facts on your own (lsblk, dmesg, gdisk, ...).
ZFSBootMenu does not require glibc and is not restricted to x86_64. If you are comfortable installing Void Linux on other architectures or with the musl libc, you can adapt the instructions here to your desired configuration.
Download the latest hrmpf, write it to a USB drive and boot your system in EFI mode.
Confirm EFI support:
# dmesg | grep -i efivars
[ 0.301784] Registered efivars operations
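If the kernel log has already rotated and this message is gone, the presence of /sys/firmware/efi also confirms that the system booted in EFI mode:

ls /sys/firmware/efi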
Configure Live Environment#
Source /etc/os-release#
The file /etc/os-release defines variables that describe the running distribution. In particular, the $ID variable defined within can be used as a short name for the filesystem that will hold this installation.
source /etc/os-release
export ID
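On the hrmpf live environment, which is Void-based, $ID should expand to void; a quick check:

echo "${ID}"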
Generate /etc/hostid#
zgenhostid -f 0x00bab10c
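The hostid command reads /etc/hostid, so it should echo back the value passed to zgenhostid:

# should print 00bab10c
hostid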
Define disk variables#
For convenience and to reduce the likelihood of errors, set environment variables that refer to the devices that will be configured during the setup.
For many users, it is most convenient to place boot files (i.e., ZFSBootMenu and any loader responsible for launching it) on the same disk that will hold the ZFS pool. However, some users may wish to dedicate an entire disk to the ZFS pool or create a multi-disk pool; in that case, a USB flash drive provides a convenient location for the boot partition. Fortunately, this alternative configuration is easily realized by defining a few environment variables differently.
Verify your target disk devices with lsblk; the devices /dev/sda, /dev/sdb and /dev/nvme0n1 used below are examples.
First, define variables that refer to the disk and partition number that will hold boot files:
export BOOT_DISK="/dev/sda"
export BOOT_PART="1"
export BOOT_DEVICE="${BOOT_DISK}${BOOT_PART}"
export BOOT_DISK="/dev/nvme0n1"
export BOOT_PART="1"
export BOOT_DEVICE="${BOOT_DISK}p${BOOT_PART}"
export BOOT_DISK="/dev/sdb"
export BOOT_PART="1"
export BOOT_DEVICE="${BOOT_DISK}${BOOT_PART}"
Next, define variables that refer to the disk and partition number that will hold the ZFS pool:
export POOL_DISK="/dev/sda"
export POOL_PART="2"
export POOL_DEVICE="${POOL_DISK}${POOL_PART}"
export POOL_DISK="/dev/nvme0n1"
export POOL_PART="2"
export POOL_DEVICE="${POOL_DISK}p${POOL_PART}"
export POOL_DISK="/dev/sda"
export POOL_PART="1"
export POOL_DEVICE="${POOL_DISK}${POOL_PART}"
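Before any destructive steps, it is worth confirming that the variables point at the devices you intend to use:

echo "Boot device: ${BOOT_DEVICE}"
echo "Pool device: ${POOL_DEVICE}"
lsblk "${BOOT_DISK}" "${POOL_DISK}"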
Disk preparation#
Wipe partitions#
wipefs -a "$POOL_DISK"
wipefs -a "$BOOT_DISK"
sgdisk --zap-all "$POOL_DISK"
sgdisk --zap-all "$BOOT_DISK"
zpool labelclear -f "$POOL_DISK"
Create EFI boot partition#
sgdisk -n "${BOOT_PART}:1m:+512m" -t "${BOOT_PART}:ef00" "$BOOT_DISK"
Create zpool partition#
sgdisk -n "${POOL_PART}:0:-10m" -t "${POOL_PART}:bf00" "$POOL_DISK"
ZFS pool creation#
Create the zpool#
Unencrypted:

zpool create -f -o ashift=12 \
-O compression=lz4 \
-O acltype=posixacl \
-O xattr=sa \
-O relatime=on \
-o autotrim=on \
-o compatibility=openzfs-2.1-linux \
-m none zroot "$POOL_DEVICE"
Encrypted:

echo 'SomeKeyphrase' > /etc/zfs/zroot.key
chmod 000 /etc/zfs/zroot.key
zpool create -f -o ashift=12 \
-O compression=lz4 \
-O acltype=posixacl \
-O xattr=sa \
-O relatime=on \
-O encryption=aes-256-gcm \
-O keylocation=file:///etc/zfs/zroot.key \
-O keyformat=passphrase \
-o autotrim=on \
-o compatibility=openzfs-2.1-linux \
-m none zroot "$POOL_DEVICE"
Note
It's outside the scope of this guide to cover all of the pool creation options used; feel free to tailor them to suit your system. However, the following options need to be addressed:
- encryption=aes-256-gcm - You can adjust the algorithm as you see fit, but this will likely be the most performant on modern x86_64 hardware.
- keylocation=file:///etc/zfs/zroot.key - This sets our pool encryption passphrase to the file /etc/zfs/zroot.key, which we created in a previous step. This file will live inside your initramfs stored on the ZFS boot environment.
- keyformat=passphrase - By setting the format to passphrase, we can now force a prompt for this in zfsbootmenu. It's critical that your passphrase be something you can type on your keyboard, since you will need to type it in to unlock the pool on boot.
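If you created an encrypted pool, the resulting properties can be inspected to confirm the settings above took effect:

zfs get encryption,keylocation,keyformat zroot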
Note
The option -o compatibility=openzfs-2.1-linux ensures that the pool is created only with feature flags supported by the current ZFSBootMenu binary release. If you plan on building a custom ZFSBootMenu image that you will keep synchronized with your host, the compatibility option may be omitted.
Binary releases of ZFSBootMenu are generally built with the latest stable version of ZFS. Future releases of ZFSBootMenu may therefore support newer feature sets. Check project release notes prior to updating or removing compatibility options and upgrading your system pool.
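The compatibility setting applied to the pool can be reviewed at any time:

zpool get compatibility zroot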
Create initial file systems#
zfs create -o mountpoint=none zroot/ROOT
zfs create -o mountpoint=/ -o canmount=noauto zroot/ROOT/${ID}
zfs create -o mountpoint=/home zroot/home
zpool set bootfs=zroot/ROOT/${ID} zroot
Note
It is important to set the property canmount=noauto on any file systems with mountpoint=/ (that is, on any additional boot environments you create). Without this property, the OS will attempt to automount all ZFS file systems and fail when multiple file systems attempt to mount at /; this will prevent your system from booting. Automatic mounting of / is not required because the root file system is explicitly mounted in the boot process.

Also note that, unlike many ZFS properties, canmount is not inheritable. Therefore, setting canmount=noauto on zroot/ROOT is not sufficient, as any subsequent boot environments you create will default to canmount=on. It is necessary to explicitly set canmount=noauto on every boot environment you create.
Export, then re-import with a temporary mountpoint of /mnt#
Unencrypted:

zpool export zroot
zpool import -N -R /mnt zroot
zfs mount zroot/ROOT/${ID}
zfs mount zroot/home
Encrypted:

zpool export zroot
zpool import -N -R /mnt zroot
zfs load-key -L prompt zroot
zfs mount zroot/ROOT/${ID}
zfs mount zroot/home
Verify that everything is mounted correctly#
# mount | grep mnt
zroot/ROOT/void on /mnt type zfs (rw,relatime,xattr,posixacl)
zroot/home on /mnt/home type zfs (rw,relatime,xattr,posixacl)
Update device symlinks#
udevadm trigger
Install Void#
Adjust the mirror, libc, and package selection as you see fit.
XBPS_ARCH=x86_64 xbps-install \
-S -R https://mirrors.servercentral.com/voidlinux/current \
-r /mnt base-system
Copy our files into the new install#
Unencrypted:

cp /etc/hostid /mnt/etc

Encrypted:

cp /etc/hostid /mnt/etc
mkdir /mnt/etc/zfs
cp /etc/zfs/zroot.key /mnt/etc/zfs
Chroot into the new OS#
xchroot /mnt
Basic Void configuration#
Set the keymap, timezone and hardware clock#
cat << EOF >> /etc/rc.conf
KEYMAP="us"
HARDWARECLOCK="UTC"
EOF
ln -sf /usr/share/zoneinfo/<timezone> /etc/localtime
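For example, a machine in the US Central time zone (an illustrative choice; substitute your own zone from /usr/share/zoneinfo) would use:

ln -sf /usr/share/zoneinfo/America/Chicago /etc/localtime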
Configure your glibc locale#
Note
This does not need to be done on musl, as musl does not have system locale support.
cat << EOF >> /etc/default/libc-locales
en_US.UTF-8 UTF-8
en_US ISO-8859-1
EOF
xbps-reconfigure -f glibc-locales
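Once reconfiguration finishes, the generated locales can be listed to confirm they were built:

locale -a | grep -i en_us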
Set a root password#
passwd
ZFS Configuration#
Configure Dracut to load ZFS support#
Unencrypted:

cat << EOF > /etc/dracut.conf.d/zol.conf
nofsck="yes"
add_dracutmodules+=" zfs "
omit_dracutmodules+=" btrfs "
EOF
Encrypted:

cat << EOF > /etc/dracut.conf.d/zol.conf
nofsck="yes"
add_dracutmodules+=" zfs "
omit_dracutmodules+=" btrfs "
install_items+=" /etc/zfs/zroot.key "
EOF
Install ZFS#
xbps-install -S zfs
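Installing the zfs package should trigger dracut and rebuild the initramfs with the configuration written above. As a sanity check, you can list the image contents to confirm the ZFS module (and, on encrypted setups, the key file) was included; the exact image name depends on your installed kernel:

# adjust the path if more than one initramfs image is present
lsinitrd /boot/initramfs-*.img | grep -e zfs -e zroot.key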
Prepare for first boot#
Exit the chroot, unmount everything#
exit
umount -n -R /mnt
Export the zpool and reboot#
zpool export zroot
reboot