BTRFS, EXT3, EXT4, XFS and KVM virtual machines: a host-side filesystem comparison

Written by Gionatan Danti. Posted in Linux & Unix


Testbed and methods

When benchmarking filesystems through virtual guest instances, extreme care must be taken to avoid benchmarking the wrong thing (e.g. host-side pagecache speed instead of real disk access speed). To keep the focus on filesystem speed rather than on hypervisor-specific configuration, I kept the KVM guest machine configuration very simple:

  • a 32 GB thin (not-preallocated) QCOW2 virtual HD image file 
  • IDE controller with cache policy set to none 
  • 2048 MB guest RAM 
  • Windows 7 x64 as guest OS 
  • all other settings were kept at their defaults. 
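For reference, the disk configuration above can be expressed as a libvirt domain XML stanza. This is a minimal sketch, not taken from the original setup: the image path and target device name are illustrative, but the driver type, bus, and cache attributes match the settings listed (thin QCOW2 image, IDE controller, cache policy none):

```xml
<!-- Hypothetical libvirt disk stanza approximating the test configuration:
     thin-provisioned QCOW2 image, IDE bus, host page cache disabled -->
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='none'/>
  <source file='/opt/vm-images/win7.qcow2'/>
  <target dev='hda' bus='ide'/>
</disk>
```

Note that cache='none' opens the image with O_DIRECT, bypassing the host page cache; this is precisely what keeps the benchmark measuring the host filesystem and disk rather than host RAM.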

The host system was a Dell D620 laptop. While I understand that this is not your typical virtualization platform, remember that we are testing filesystem performance here, not the hardware itself. The detailed host specifications are:

  • Core2 T7200 CPU @ 2.0 GHz 
  • 4 GB of DDR2-667 RAM 
  • Quadro NVS110 videocard (used in text-only mode) 
  • a Seagate ST980825AS 7200 RPM 80 GB SATA hard disk drive (in IDE compatibility mode, as the D620's BIOS does not support AHCI operation) 
  • OS: Fedora 17 x86_64 with kernel version 3.4.0-1.fc17.x86_64 

The internal hard disk was partitioned into three slices: a first ~9 GB ext4 partition for the root filesystem, a second ~4 GB partition for swap, and a third ~60 GB partition (mounted on /opt) for testing purposes (the HD image files were kept here).
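As an aside, a thin (sparse) QCOW2 image such as the one used in these tests can be created with qemu-img; the path below is illustrative, not the one from the original setup:

```shell
# Create a thin-provisioned 32 GB QCOW2 image: only metadata is written
# at creation time, and the file grows on demand as the guest writes.
qemu-img create -f qcow2 /opt/vm-images/win7.qcow2 32G
```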

Guest performance was measured in several ways:

  • Windows 7 (first-phase) install time
  • Random read / write speed at both low and high queue depth 
  • Sequential read / write speed at both low and high queue depth 

Random and sequential speeds were measured using the latest IOMeter stable build (2006.07.27).

Please note that this is only a first journey into benchmarking filesystem performance as it relates to virtual machine consolidation. I hope to run better tests in the future. Obviously, if you have any ideas, let me know!

Comments   

 
#1 NGRhodes 2012-07-25 12:07
Hi,

When looking at fragmentation, do not forget that EXT4 supports a maximum of 32K blocks per extent, which at the default 4 KB block size is 128 MB.
As your benchmarks show, this does not appear to be a performance limitation.
What would be more interesting is rerunning the benchmarks on an aged filesystem and comparing how performance degrades with age.
 
 
#2 Gionatan Danti 2012-07-30 11:20
Hi NGRhodes,
sorry for the late reply.

You are correct about the EXT4 extent size, but in theory nothing prevents two extents from being placed one right after the other. In fact, the filefrag utility checks whether two extents are consecutively placed and, in that case, reports them as a single fragment rather than two.

Do you have any proposal for the aged-filesystem test? Any idea is welcome ;)
 
 
#3 Jack Douglas 2012-10-14 21:15
Hi,

Thanks for the very helpful article. Would you be open to the suggestion of running the same tests with OCFS2 on a single node? (It is the only filesystem other than btrfs with 'reflink' file snapshots, which are very useful for backing up VMs.)

Kind regards
Jack
 
 
#4 Boki 2016-04-21 12:59
Thanks for a great article. Now, almost 4 years later, are there any new comparison results or suggestions?
 
