ZFS, BTRFS, XFS, EXT4 and LVM with KVM – a storage performance comparison
Page 6 of 11
Sysbench complex test
Ok, preparing the DB produced mixed results, but what about running the sysbench complex test against it?
At first sight, results are very low for all the contenders. However, you should remember that this benchmark runs concurrently with the others (fileserver, varmail and webserver), so the host had a very hard time keeping pace with the outstanding I/O operations.
Basically, any setup apart from BTRFS and ZFS gives similar performance. Even ZFS can be deemed acceptable. BTRFS, however, is a complete fiasco here: it is 10 times slower than ZFS.
Now, have a look at latency:
A very similar pattern emerges. In other words, don't use BTRFS for a PostgreSQL installation, nor for any storage subsystem backing a PostgreSQL VM.
Comments
- Disable COW on the folder containing VM image files (to reduce write amplification)
- Disable QCOW2 and use sparse RAW for VM image files (to reduce fragmentation of extents apparently caused by QCOW2 block mapping algorithm)
Both tests were on a Linux 4.2 kernel. The QCOW2 cluster size was 64K in the test using QCOW2. I only tested with COW disabled. The performance difference is likely even greater with NOCOW + RAW versus COW + QCOW2.
To convert VM images, the following commands are useful:
$ chattr +C new_images/
$ truncate -s 100G new_images/vm1.raw
$ qemu-nbd -c /dev/nbd0 old_images/vm1.qcow2
$ dd conv=notrunc,sparse bs=4M if=/dev/nbd0 of=new_images/vm1.raw
$ qemu-nbd -d /dev/nbd0
Shut down virtual machines before conversion, change XML to point to new files and restart virtual machines when done.
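For libvirt guests, the "change XML" step means updating the disk stanza of the domain definition (e.g. via `virsh edit vm1`) so it points at the raw file. A hypothetical resulting stanza, where the path and target device are only examples:

```xml
<!-- hypothetical disk stanza after conversion; libvirt wants an absolute path -->
<disk type='file' device='disk'>
  <driver name='qemu' type='raw'/>
  <source file='/var/lib/libvirt/images/new_images/vm1.raw'/>
  <target dev='vda' bus='virtio'/>
</disk>
```

Note that both `driver type` (from `qcow2` to `raw`) and `source file` need to change, or the guest will fail to boot.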
But that makes btrfs useless. No snapshots, no checksumming. It's fair to test with CoW - do you have any numbers for that?
I'm taking it you forgot to mount BTRFS with compression enabled (which really should be the default)?
Can you please test BTRFS and make sure you're mounting with the compress=lzo option?
QCOW2 is also very suboptimal for modern VMs; in reality you'd always use raw devices or logical volumes.
It would be interesting to see you re-run these tests using a modern kernel, say at least 4.4, and either raw block devices or logical volumes, along with mounting BTRFS properly with the compress=lzo option.
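For reference, BTRFS compression is enabled per mount, not per filesystem; a hypothetical /etc/fstab entry (the UUID and mount point are placeholders):

```
# /etc/fstab -- device UUID and mount point are placeholders
UUID=0000-0000  /var/lib/libvirt/images  btrfs  compress=lzo,noatime  0  0
```

Keep in mind that compress= only affects data written after the option takes effect; existing VM images stay uncompressed until rewritten.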
No, I did not use any compression (which, by the way, was disabled by default).
I stick to distribution-provided kernels when possible, and 3.10.x is the current kernel for RHEL7/CentOS7.
Finally, I agree that RAW images are marginally faster than preallocated QCOW2 files, and when possible I used them. However, for the block layer/filesystem combos which do not support snapshots, I used QCOW2 to have at least partial feature parity with the more flexible alternatives.
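The sparse-versus-preallocated distinction is easy to see with plain coreutils; a minimal sketch (the 100M size is just for illustration, real VM images would be far larger):

```shell
cd "$(mktemp -d)"

# Sparse image: the apparent size is set, but no blocks are reserved yet
truncate -s 100M sparse.raw

# Preallocated image: blocks are reserved up front, which helps avoid
# fragmentation as the guest fills the disk
fallocate -l 100M prealloc.raw

# Apparent sizes match; actual on-disk usage differs
du --apparent-size -k sparse.raw prealloc.raw
du -k sparse.raw prealloc.raw
```

The same trade-off applies to QCOW2, where preallocation is chosen at image-creation time instead.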
ZFS gets more updates without upgrading the kernel. This is not the case with BTRFS, which needs an updated kernel. The kernel version is important to know in this case (and it will need to be updated for a comparison relevant to enterprise distributions; Ubuntu 16.04 LTS, for example, now ships a 4.4 kernel).
The latter: raw images on a ZFS filesystem