ZFS, BTRFS, XFS, EXT4 and LVM with KVM – a storage performance comparison

Written by Gionatan Danti on . Posted in Virtualization


Conclusions

Well, after so many pages, what lessons can we learn?

If you need maximum, stable performance, even at the cost of less flexibility and ease of use, go with classical LVM volumes. My performance builds were based on this LVM scheme and I am very pleased with how they run. However, do not forget to make judicious use of snapshots, as the legacy implementation is quite slow.
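As a rough sketch of that scheme (the volume group and volume names below are hypothetical), a classic LVM volume and its comparatively slow legacy snapshot look like this:

```shell
# Hypothetical names: volume group "vg0", a 40 GiB logical volume as a VM disk.
lvcreate -L 40G -n vm1 vg0

# Classic (non-thin) snapshot with a 5 GiB copy-on-write area; every write
# to the origin first copies the old block into it, hence the slowness.
lvcreate -s -L 5G -n vm1-snap /dev/vg0/vm1
```

These commands need root and real block devices, so treat them as an illustration of the layout rather than a ready-made script.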

If you are ready to use a somewhat newer and more flexible (albeit less tested) technology, ThinLVM is your prime candidate, especially when coupled with a host-side filesystem: with this latter arrangement you can disable zeroing without too many security concerns. Snapshots are quite fast, too. Just remember that fragmentation can somewhat slow down your system over time.
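A minimal sketch of such a thin pool, with zeroing disabled at creation time (pool and volume names are hypothetical):

```shell
# Hypothetical names: thin pool "pool0" inside volume group "vg0".
# -Zn disables zeroing of newly provisioned chunks; reasonable when guests
# only see the storage through a host-side filesystem.
lvcreate --type thin-pool -Zn -L 100G -n pool0 vg0

# A 40 GiB thin volume, plus a near-instant thin snapshot of it.
lvcreate -V 40G --thinpool pool0 -n vm1 vg0
lvcreate -s -n vm1-snap vg0/vm1
```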

A similar consideration applies to ZFS on Linux, which shows very good results. You will definitely like its advanced features, most notably compression (with the LZ4 compressor). But if you are thinking of enabling on-line deduplication, please think twice: due to how it is implemented, it both requires an enormous amount of RAM and often gives you little space saving (but that is material for another article...)
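Enabling LZ4 compression on a dataset is a one-liner (the pool and dataset names here are hypothetical):

```shell
# Hypothetical pool "tank" with a dataset dedicated to VM images.
zfs create tank/vmstore
zfs set compression=lz4 tank/vmstore

# Check how much space you are actually saving.
zfs get compressratio tank/vmstore
```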

For VM storage, stay well away from BTRFS: not only is it marked as a “Tech Preview” by Red Hat (read: not 100% production ready), but it is also very slow when used as a VM image store.

I hope you found this article interesting. If you want, you can email me.

Have a nice day!

Comments   

 
#11 capsicum 2016-02-14 03:42
What are the structural details of the thin LVM arrangement? The KVM information I have gives a warning that thin provisioning is not possible with LVM pools. I am new to KVM and VMs, but I do know the traditional LVM structure (PV, VG, LV or thin-LV, fs)
 
 
#12 Albert Henriksen 2016-02-15 21:40
In my own tests, BTRFS performance is more than 180 times faster if you do the following:

- Disable COW on the folder containing VM image files (to reduce write amplification)
- Disable QCOW2 and use sparse RAW for VM image files (to reduce fragmentation of extents apparently caused by QCOW2 block mapping algorithm)

Both tests were on a Linux 4.2 kernel. The QCOW2 cluster size was 64K in the test using QCOW2. I only tested with COW disabled. The performance difference is likely even greater with NOCOW + RAW versus COW + QCOW2.

To convert VM images, the following commands are useful:
$ chattr +C new_images/
$ truncate -s 100G new_images/vm1.raw
$ qemu-nbd -c /dev/nbd0 old_images/vm1.qcow2
$ dd conv=notrunc,sparse bs=4M if=/dev/nbd0 of=new_images/vm1.raw
$ qemu-nbd -d /dev/nbd0

Shut down virtual machines before conversion, change XML to point to new files and restart virtual machines when done.
 
 
#13 mt 2016-03-03 11:17
Quoting Albert Henriksen:
In my own tests, BTRFS performance is more than 180 times faster if you do the following:

- Disable COW on the folder containing VM image files (to reduce write amplification)
- Disable QCOW2 and use sparse RAW for VM image files (to reduce fragmentation of extents apparently caused by QCOW2 block mapping algorithm)


But that makes btrfs useless. No snapshots, no checksumming. It's fair to test with CoW - do you have any numbers for that?
 
 
#14 Sam 2016-05-23 00:54
Hello,

I take it you forgot to mount BTRFS with compression enabled (which really should be the default)?

Can you please re-test BTRFS and make sure you're mounting with the compress=lzo option?
 
 
#15 Sam 2016-05-23 00:58
Also, I just saw your note about kernel 3.10! We run many hundreds of VMs and not a single production server is running a kernel this old; we run between 4.4 and 4.6 on CentOS 7.

QCOW2 is also very suboptimal for modern VMs; in reality you'd always use raw devices or logical volumes.

It would be interesting to see you re-run these tests with a modern kernel (say, at least 4.4), either raw block devices or logical volumes, and BTRFS mounted properly with the compress=lzo option.
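For reference, enabling it at mount time looks roughly like this (the device and mount point are placeholder names):

```shell
# Mount a btrfs filesystem with lzo compression enabled.
mount -o compress=lzo /dev/sdb1 /var/lib/libvirt/images

# Or make it permanent via /etc/fstab:
# /dev/sdb1  /var/lib/libvirt/images  btrfs  compress=lzo  0 0
```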
 
 
#16 Luca 2016-05-23 23:28
Great article, but pagination makes it painful to read
 
 
#17 Gionatan Danti 2016-05-24 15:22
@Sam

No, I did not use any compression (which, by the way, was disabled by default).

I stick to distribution-provided kernels when possible, and 3.10.x is the current kernel for RHEL7/CentOS7.

Finally, I agree that RAW images are marginally faster than preallocated QCOW2 files, and when possible I used them. However, for the block layer/filesystem combos that do not support snapshots, I used QCOW2 to have at least partial feature parity with the more flexible alternatives.
 
 
#18 Yonsy Solis 2016-05-30 16:28
OK, so you try to use distribution-provided kernels when possible, but when you compare a filesystem built from an external module (ZFS from ZFS on Linux) against a filesystem from the distribution kernel (BTRFS), with all the old characteristics that implies, your comparison becomes invalid.

ZFS gets updates without upgrading the kernel; that is not the case with BTRFS, which needs an updated kernel. The kernel version is important to know in this case (and would need updating for a comparison relevant to enterprise distributions; Ubuntu 16.04 LTS, for example, now ships a 4.4 kernel).
 
 
#19 Brian Candler 2016-12-15 09:37
For "raw images ZFS", do you mean you created a zvol block device, or a raw .img file sitting in a zfs dataset (filesystem)?
 
 
#20 Gionatan Danti 2016-12-15 09:52
Quoting Brian Candler:
For "raw images ZFS", do you mean you created a zvol block device, or a raw .img file sitting in a zfs dataset (filesystem)?


The latter: raw images on a ZFS filesystem.
 
