ZFS on Linux

How ZFS on Linux compares to ZFS on Illumos or FreeBSD

On March 27, 2013, the ZFS on Linux (ZoL) maintainers announced that the 0.6.1 release was ready for wide-scale deployment on everything from desktops to servers. Yet, because the ZoL project is still young and not widely adopted, many maintainers and advocates of ZFS are not yet comfortable running ZoL in production.

The reason behind this reluctance is that ZFS on Solaris took many years to reach maturity and went through ups and downs of data corruption bugs and other issues along the way. In the same way, ZoL will need time to mature as a product, likely a year or so as more people deploy it in production. Developers who want to take advantage of ZFS now can start by rolling less critical database servers (e.g., reporting servers or third-tier slaves) into production and running them for about six months before rolling it out to all database servers. This builds the confidence and experience needed to work with ZoL. Alternatively, developers may want to run ZFS on OmniOS, where it has been battle tested for many years.

How ZFS on Linux Compares to ZFS on Illumos or FreeBSD
From the system administrator's perspective, the implementation of ZFS on Linux is not very different from ZFS on Illumos or FreeBSD. Management and general usage are nearly identical; the only differences are OS-specific functionality. For example, on FreeBSD a user who wants to use a zvol for swap space sets the org.freebsd:swap=on property on the zvol to turn swap on, whereas on Linux a developer would create a plain zvol and set up swap like any other partition with mkswap and swapon. Under the latest versions of all three operating systems mentioned, the zpool version is at the same level: zpool v28 with additional features added by way of feature flags. They are compatible: users can create a zpool on Illumos/OmniOS, use it, export it, move the disks to a FreeBSD server, import the zpool, use it, export it, move the disks to a Linux server, import the zpool, and so on. This exact scenario is something we have done at OmniTI and it worked without a hitch. One issue, however, is that ACL support and usability differ on each OS, so the user will likely have to clean up the permissions a bit.
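As a minimal sketch of the differences just described (the pool name "tank" and the tank/swap zvol are illustrative, not part of the OmniTI setup):

# FreeBSD: swap on a zvol is enabled through a property
zfs set org.freebsd:swap=on tank/swap

# Linux: the zvol is treated like any other swap device
mkswap /dev/zvol/tank/swap && swapon /dev/zvol/tank/swap

# Moving a pool between operating systems: export on the source host,
# move the disks, then import on the destination host
# (a bare "zpool import" lists the pools found on attached disks)
zpool export tank
zpool import tank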

Caveats for Running ZFS on Arch Linux in a Production Environment
ZFS under Arch Linux is not part of the main package repository. Because ZFS and its utilities are maintained by a third party, developers must rely on that third party to keep the packages up to date. One issue is that every time a new kernel is released (which happens frequently), the ZFS kernel modules must be rebuilt as well. If the company upgrades its system (pacman -Syu) and reboots, but the ZFS modules were not recompiled as well, zpools will not initialize. This becomes especially important when the rootfs is on ZFS, since it leaves the system unbootable and the user is forced to recover by means of a rescue CD or, in the case of AWS, by moving the EBS volumes to another instance and recovering from there. Linux does have a mechanism for automating this process, DKMS. However, the Arch zfs-modules-dkms package that provides this functionality is not kept up to date, and shouldn't be used.
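A quick sanity check before rebooting after a kernel upgrade might look like the following sketch (the module path is where the Arch packages normally install kernel modules; treat it as an assumption to verify on your own system):

# Version of the kernel that pacman just installed
pacman -Q linux

# Kernel trees that actually contain a built ZFS module
find /usr/lib/modules -name 'zfs.ko*'

# If the new kernel version does not appear in the output above, rebuild and
# reinstall spl/zfs before rebooting, or the pools (and a ZFS rootfs) will not come up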

Also, as briefly mentioned above, it should be noted that one cannot boot directly from ZFS on Linux; users must maintain a Linux-bootloader-compatible filesystem for /boot, such as ext2/3/4.

Currently, many of the utilities that output information about filesystems are not ZFS aware, and developers can get strange results from commands such as "df", which does not know the relationship between datasets and their parents. This will not necessarily prevent anything from running, but it is worth noting. Generally it's best to use "zfs list" rather than "df" to get accurate results.
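For example, the difference is simply which tool is asked:

df -h       # generic tools do not understand dataset/parent relationships
zfs list    # reports space usage as ZFS itself accounts for it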

ZFS natively uses NFSv4-style ACLs and is not compatible with POSIX-style ACLs, so any applications that rely on POSIX-style ACLs will have issues. Default GNU utilities such as "ls" are not NFSv4 ACL aware.

Lastly, the ZoL project proclaims that ZFS on Linux is production ready, but it is worth noting that the port is still very immature at this point. ZFS itself has been around and been tested for quite some time and is mature; the Linux implementation is not, so be careful and test before using it in a production environment.

How To Install ZFS on Arch with RootFS on ZFS
The Arch Wiki page for Installing Arch Linux on ZFS goes into great detail on the process. The key points are as follows:

  • The ZFS utilities and kernel modules must be built/installed prior to beginning the installation (within the CD Boot environment)
  • Even though you can have the root FS on ZFS, the Linux bootloaders cannot currently load the kernel from ZFS, so you still need a small ext2/3 partition for /boot to hold the kernel, the initramfs, and files that the bootloader requires.
  • There is no "beadm" in Linux to support multiple Boot Environment snapshots currently. One of the benefits of ZFS on Illumos/OmniOS is the ability to rollback to an earlier boot environment when applying updates.
  • When building the initramfs image, the zfs hook must come before the filesystems hook and you should not use the fsck hook at all.
  • You need to enable the ZFS service in systemd as this is not enabled by default. Under a ZFS Root system, this is very important if you like your systems to boot.
  • The "kernel" line of the bootloader needs to include a parameter telling the kernel where the root FS resides. For example, if the root FS is on a zpool named "rpool" and its dataset is rpool/ROOT/default, then this parameter would be zfs=rpool/ROOT/default (see the illustrative configuration just after this list).
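To make the last three points concrete, the relevant pieces end up looking roughly like this (the pool and dataset names carry over from the example above, and the HOOKS line assumes the stock Arch mkinitcpio.conf; the full script is in Appendix A):

# /etc/mkinitcpio.conf - zfs hook before filesystems, no fsck hook
HOOKS="base udev autodetect modconf block keyboard zfs filesystems"

# Enable the ZFS service so pools are imported and mounted at boot
systemctl enable zfs.service

# GRUB menu entry - the kernel line points at the root dataset
menuentry "Arch Linux" {
    set root=(hd0,2)
    linux /vmlinuz-linux zfs=rpool/ROOT/default
    initrd /initramfs-linux.img
}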

It is important to remember to export the zpool prior to rebooting after installation; otherwise ZFS will complain that the system is different and will not import the pool. This is because the new system is, in fact, a different system than the CD media boot environment. Also, it's a very good idea to rebuild the initramfs (mkinitcpio -p linux) right away once you log into the installed system for the first time, to avoid any "pool may be in use" errors caused by differences from the CD media boot environment in which the ramdisk was initially created.

Included at the end of this article are portions of a script used to build Arch Linux on ZFS. The only parts that have been removed are things specific to my environment. It is given only as an example to illustrate the steps that can be used; it may or may not match the methods appropriate for your environment.

References
ZFS on Linux Main Page: http://zfsonlinux.org

Arch Wiki - Installing Linux on ZFS: https://wiki.archlinux.org/index.php/Installing_Arch_Linux_on_ZFS

Arch Wiki - ZFS: https://wiki.archlinux.org/index.php/ZFS

Appendix A - Excerpts from Vagrant install script

The following are the commands we use when installing ZFS on Arch Linux under Vagrant. The Vagrant-specific bits have been removed as they would not apply to installation on a production server. The full script can be found here: https://github.com/Loki22/scripts/blob/master/Vagrant/archzfs_vagrant_install.sh

pacman -Syy

pacman -S --noconfirm base-devel

mkdir /root/build

cd /root/build

wget https://aur.archlinux.org/packages/sp/spl-utils/spl-utils.tar.gz

wget https://aur.archlinux.org/packages/sp/spl/spl.tar.gz

wget https://aur.archlinux.org/packages/zf/zfs-utils/zfs-utils.tar.gz

wget https://aur.archlinux.org/packages/zf/zfs/zfs.tar.gz

for i in spl-utils spl zfs-utils zfs
do
    cd /root/build && tar zxvf ${i}.tar.gz
    cd /root/build/${i}
    makepkg -s --asroot --noconfirm && pacman -U --noconfirm ./${i}*.pkg.tar.xz
done

# Install packages needed for ZFS

pacman -S --noconfirm archzfs dosfstools gptfdisk

# Clear the disk and initialize in GPT Format

sgdisk -o -g /dev/sda

# Partitioning - 3 Partitions (BIOS Boot Partition, /boot, and ZFS)

sgdisk -n 2:2048:+512M -c 2:"Linux Boot Partition" -t 2:8300 /dev/sda

sgdisk -n 3:0:0 -c 3:"ZFS Root Pool" -t 3:bf00 /dev/sda

sgdisk -n 1:34:2047 -c 1:"BIOS Boot Partition" -t 1:ef02 /dev/sda

# Create filesystem for /boot partition

mkfs.ext4 -L BOOT /dev/sda2

# Set up the ZFS Root Pool

modprobe zfs

zpool create rpool /dev/sda3

zfs set checksum=fletcher4 rpool

zfs set atime=off rpool

zfs set compression=lzjb rpool

zfs set mountpoint=none rpool

zpool export rpool
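
# Re-import by /dev/disk/by-id for persistent device names; -R /mnt sets a temporary altroot for the install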

zpool import -d /dev/disk/by-id -R /mnt rpool

# Set up the initial BE (linux doesn't have beadm at this point, but not a bad idea to think ahead)

zfs create rpool/ROOT

zfs create -o mountpoint=/ rpool/ROOT/default

zpool set bootfs=rpool/ROOT/default rpool

# Set up datasets that are not part of the BE

zfs create -o mountpoint=/home -o setuid=off rpool/home

zfs create -o mountpoint=/root -o setuid=off rpool/roothome

# Create swap (example here is 2GB, use 4K block size for 64 bit systems)

zfs create -V 2G -b 4K rpool/swap

mkswap -Lswap -f /dev/rpool/swap

swapon /dev/rpool/swap

# Mount /boot

mkdir /mnt/boot

mount /dev/sda2 /mnt/boot

# Change ZFS repo to core now that we have it installed. This is so the new system will use updated modules linked to the new kernel as opposed to the somewhat more stale kernel that is used on the Install CD.

sed -i 's/demz-repo-archiso/demz-repo-core/' /etc/pacman.conf

# Bootstrap the new installation

pacstrap /mnt base base-devel archzfs sudo gnupg vim

# Generate the fstab, minus the ZFS entries, since ZFS handles mounting its own datasets

genfstab -U -p /mnt | grep boot >> /mnt/etc/fstab

# Configuration

CHROOT="arch-chroot /mnt"

# Hostname

echo "myhostname" > /mnt/etc/hostname

# Timezone and Clock

ln -s /usr/share/zoneinfo/America/New_York /mnt/etc/localtime

hwclock --systohc --utc

# Locale

sed -i 's/^#\(en_US.*\)/\1/' /mnt/etc/locale.gen

$CHROOT locale-gen

echo 'LANG="en_US.UTF-8"' > /mnt/etc/locale.conf

# Keymap

echo "KEYMAP=us" > /mnt/etc/vconsole.conf

# Mkinitcpio

sed -i 's/^\(HOOKS.*\)filesystems keyboard fsck/\1keyboard zfs filesystems/' /mnt/etc/mkinitcpio.conf

$CHROOT mkinitcpio -p linux

# Enable ZFS at boot

$CHROOT systemctl enable zfs.service

# Install GRUB

$CHROOT pacman -S --noconfirm grub-bios
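
# grub-install expects device-mapper support, so load the dm-mod module first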

modprobe dm-mod

$CHROOT grub-install --target=i386-pc --recheck --debug /dev/sda

cp /mnt/usr/share/locale/en\@quot/LC_MESSAGES/grub.mo /mnt/boot/grub/locale/en.mo

mv /mnt/boot/grub/grub.cfg /mnt/boot/grub/grub.cfg.orig

cat > /mnt/boot/grub/grub.cfg <<EOF
set timeout=2
set default=0

# (0) Arch Linux
menuentry "Arch Linux" {
    set root=(hd0,2)
    linux /vmlinuz-linux zfs=rpool/ROOT/default
    initrd /initramfs-linux.img
}

# (1) Arch Linux (fallback)
menuentry "Arch Linux - Fallback" {
    set root=(hd0,2)
    linux /vmlinuz-linux zfs=rpool/ROOT/default
    initrd /initramfs-linux-fallback.img
}
EOF

# SSH

$CHROOT pacman -S --noconfirm openssh

ln -s '/usr/lib/systemd/system/sshd.service' \
    '/mnt/etc/systemd/system/multi-user.target.wants/sshd.service'

# Networking on installed system

# Manual linking because systemd isn't running yet

# Run 'ip link' to check the network interface and make sure it's enp0s3

ln -s '/usr/lib/systemd/system/dhcpcd@.service' \
    '/mnt/etc/systemd/system/multi-user.target.wants/dhcpcd@enp0s3.service'

# Clean up

# Remove downloaded packages

$CHROOT pacman -Scc --noconfirm

# Set your root password

$CHROOT passwd root

# Unmount filesystems, change ZFS mountpoints, and reboot

umount /mnt/boot

zfs umount -a

zpool export rpool

echo "If there were no errors, it would now be safe to reboot into the new system."

Appendix B - Recovery process if ZFS modules are not rebuilt on kernel upgrade
As mentioned above, the ZFS modules need to be rebuilt on every kernel upgrade. If this isn't done, you need to recover from a rescue environment. The recovery process (assuming you boot from CD) is to build the ZFS modules and utilities from the AUR (spl-utils, spl, zfs-utils, and zfs) in the temporary rescue environment, load the ZFS module, import the zpool under /mnt, mount the /boot FS at /mnt/boot, chroot in, build the ZFS modules and utilities again against the kernel in the chroot environment, rebuild the initramfs (mkinitcpio -p linux), and reboot. Needless to say, this is not fun while people are screaming at you because the production server is down. This problem will be alleviated at some point when the ZFS packages are adopted into the main repositories and maintained with the rest of the release process.
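A rough outline of that sequence, assuming the layout from Appendix A (pool rpool, /boot on /dev/sda2) and with the AUR package builds abbreviated:

# In the rescue/CD environment: build and install spl-utils, spl, zfs-utils and zfs
# from the AUR (same loop as in Appendix A), then bring the pool up
modprobe zfs
zpool import -R /mnt rpool    # -f may be needed if the pool was last used by another hostid
mount /dev/sda2 /mnt/boot
arch-chroot /mnt

# Inside the chroot: build/install the ZFS packages again, this time against the
# installed system's kernel, then regenerate the initramfs and leave the chroot
mkinitcpio -p linux
exit

umount /mnt/boot
zpool export rpool
reboot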

More Stories By Kevin Loukinen

Kevin Loukinen is a Site Reliability Engineer at OmniTI. Prior to that, he worked as both a Systems Administrator and a Network Administrator for more than 12 years across several industries (financial, government and telecommunications).


