ZFS, BTRFS, XFS, EXT4 and LVM with KVM – a storage performance comparison

Written by Gionatan Danti. Posted in Virtualization


Virtual machine storage performance is a hot topic – after all, one of the main problems when virtualizing many OS instances is correctly sizing the I/O subsystem, both in terms of space and speed.

Of course, performance is not the only thing to consider: flexibility and ease of use/configuration also play a big role. The perfect storage subsystem is the right compromise between performance, flexibility and ease of use.

Somewhat ironically, of the three requisites above, performance is the most difficult to measure, as “I/O speed” is an ephemeral concept. It is impossible to speak about I/O performance without taking into account three main parameters: I/O block size, I/Os per second (IOPS) and queue depth (the number of outstanding, concurrent block requests). This is the first problem in correctly sizing your disk subsystem: you have to guess the expected access pattern, and you had better guess right.
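As an illustration, these three parameters map directly onto the knobs of a benchmarking tool such as fio. The job file below is a hypothetical sketch (file name and sizes are placeholders), not the configuration used in the tests in this article:

; hypothetical fio job: 4 KiB random reads at queue depth 32
[randread-4k]
filename=/path/to/testfile   ; assumed test file, adjust to your setup
size=256m
rw=randread
bs=4k              ; I/O block size
iodepth=32         ; queue depth: outstanding concurrent requests
ioengine=libaio
direct=1           ; bypass the page cache
runtime=30
time_based=1

Changing bs, iodepth and rw is enough to simulate very different access patterns – which is exactly why a single "I/O speed" number is meaningless.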

Another pitfall is how to provision (allocate) space inside the configured storage subsystem: is it better to use fat or thin provisioning? What are the performance implications?
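For LVM, for example, the two approaches boil down to different lvcreate invocations. A minimal sketch, assuming a volume group named vg0 (names and sizes here are purely illustrative):

$ # fat (fully preallocated) volume: all 250G are reserved up front
$ lvcreate -L 250G -n vm1-disk vg0

$ # thin provisioning: a 100G pool backing a 250G virtual volume,
$ # which consumes pool space only as the guest actually writes
$ lvcreate -L 100G --thinpool pool0 vg0
$ lvcreate -V 250G --thin -n vm2-disk vg0/pool0

Thin volumes trade some write performance (block allocation happens on first write) for space efficiency and fast snapshots.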

At the same time, flexibility and ease of use are easy things to sell: I often read of how modern CoW filesystems such as BTRFS and ZFS have plenty of features, and of how many people recommend them for performance-sensitive tasks such as virtual machine storage.

Sure, BTRFS and ZFS have a host of slick features, but are they really suited for storing “live” virtual machine data, or are our beloved legacy filesystems, EXT4 and XFS, better suited for the task? And what about going a layer down and playing directly with Logical Volumes?

I'll try to answer these questions. Anyway, please keep in mind that the results of any benchmark are of relative value – I don't pretend to elect the uber-mega-super-duper I/O configuration. I only hope to help you select the right tool for the right job.

Comments   

 
#11 capsicum 2016-02-14 03:42
What are the structural details of the thin LVM arrangement? The KVM information I have warns that thin provisioning is not possible with LVM pools. I am new to KVM and VMs, but I do know the traditional LVM structure (PV, VG, LV or thin-LV, fs).
 
 
#12 Albert Henriksen 2016-02-15 21:40
In my own tests, BTRFS performance is more than 180 times faster if you do the following:

- Disable COW on the folder containing VM image files (to reduce write amplification)
- Disable QCOW2 and use sparse RAW for VM image files (to reduce fragmentation of extents apparently caused by QCOW2 block mapping algorithm)

Both tests were on a Linux 4.2 kernel. The QCOW2 cluster size was 64K in the test using QCOW2. I only tested with COW disabled. The performance difference is likely even greater with NOCOW + RAW versus COW + QCOW2.

To convert VM images, the following commands are useful:
$ chattr +C new_images/
$ truncate -s 100G new_images/vm1.raw
$ qemu-nbd -c /dev/nbd0 old_images/vm1.qcow2
$ dd conv=notrunc,sparse bs=4M if=/dev/nbd0 of=new_images/vm1.raw
$ qemu-nbd -d /dev/nbd0

Shut down the virtual machines before conversion, change the XML to point to the new files, and restart the virtual machines when done.
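That last step can be scripted with virsh. A rough sketch, assuming a libvirt domain named vm1 and the paths used above (adjust everything to your setup, and inspect the edited XML before defining it):

$ virsh shutdown vm1
$ virsh dumpxml vm1 > vm1.xml
$ # point the disk at the new raw image and switch the driver type
$ sed -i -e "s|old_images/vm1.qcow2|new_images/vm1.raw|" \
         -e "s|type='qcow2'|type='raw'|" vm1.xml
$ virsh define vm1.xml
$ virsh start vm1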
 
 
#13 mt 2016-03-03 11:17
Quoting Albert Henriksen:
In my own tests, BTRFS performance is more than 180 times faster if you do the following:

- Disable COW on the folder containing VM image files (to reduce write amplification)
- Disable QCOW2 and use sparse RAW for VM image files (to reduce fragmentation of extents apparently caused by QCOW2 block mapping algorithm)


But that makes btrfs useless. No snapshots, no checksumming. It's fair to test with CoW - do you have any numbers for that?
 
 
#14 Sam 2016-05-23 00:54
Hello,

I take it you forgot to mount BTRFS with compression enabled (which really should be the default)?

Can you please re-test BTRFS and make sure you're mounting with the compress=lzo option?
 
 
#15 Sam 2016-05-23 00:58
Also, I just saw your note about kernel 3.10! We run many hundreds of VMs and not a single production server runs a kernel this old; we run between 4.4 and 4.6 on CentOS 7.

QCOW2 is also very suboptimal for modern VMs; in reality you'd always use raw devices or logical volumes.

It would be interesting to see you re-run these tests on a modern kernel, say at least 4.4, with either raw block devices or logical volumes, and with BTRFS mounted properly with the compress=lzo option.
 
 
#16 Luca 2016-05-23 23:28
Great article, but pagination makes it painful to read
 
 
#17 Gionatan Danti 2016-05-24 15:22
@Sam

No, I did not use any compression (which, by the way, was disabled by default).

I stick to distribution-provided kernels when possible, and 3.10.x is the current kernel for RHEL7/CentOS7.

Finally, I agree that RAW images are marginally faster than preallocated QCOW2 files, and when possible I used them. However, for the block layer/filesystem combos which do not support snapshots, I used QCOW2 to have at least partial feature parity with the more flexible alternatives.
 
 
#18 Yonsy Solis 2016-05-30 16:28
OK, so you try to use distribution-provided kernels when possible; but when you integrate one filesystem from an external module (ZFS from ZFS on Linux) and another filesystem, with its old characteristics, from the provided kernel (BTRFS), your comparison becomes invalid.

ZFS gets updates without upgrading the kernel. This is not the case with BTRFS, which needs an updated kernel. The kernel version is important to know in this case (and would need to be updated for a comparison relevant to enterprise distributions – Ubuntu 16.04 LTS, for example, now ships a 4.4 kernel).
 
 
#19 Brian Candler 2016-12-15 09:37
For "raw images ZFS", do you mean you created a zvol block device, or a raw .img file sitting in a zfs dataset (filesystem)?
 
 
#20 Gionatan Danti 2016-12-15 09:52
Quoting Brian Candler:
For "raw images ZFS", do you mean you created a zvol block device, or a raw .img file sitting in a zfs dataset (filesystem)?


The latter: raw images on a ZFS filesystem.
 
