Virtual machine storage performance is a hot topic – after all, one of the main problems when virtualizing many OS instances is correctly sizing the I/O subsystem, both in terms of space and speed.
Of course performance is not the only thing to consider: flexibility and ease of use/configuration also play a big role. So the perfect storage subsystem is the right compromise between performance, flexibility and ease of use.
Somewhat ironically, of the three requisites above, performance is the most difficult to measure, as “I/O speed” is an elusive concept. It is impossible to speak about I/O performance without taking into account three main parameters: I/O block size, I/Os per second (IOPS) and queue depth (the number of outstanding, concurrent block requests). This is the first problem in correctly sizing your disk subsystem: you have to guess the expected access pattern, and you had better guess right.
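To make these three parameters concrete, here is a hypothetical fio job file (fio is a common I/O benchmarking tool; the filename and values below are purely illustrative, not the configuration used in this article) that pins down all three knobs explicitly:

```ini
; Illustrative fio job: random 4 KiB reads with 32 outstanding requests
[vm-like-randread]
ioengine=libaio
direct=1
rw=randread
; I/O block size
bs=4k
; queue depth: number of outstanding, concurrent requests
iodepth=32
size=4g
runtime=60
time_based=1
filename=/path/to/testfile
```

Changing any one of `bs`, `iodepth` or the access pattern (`rw`) can shift the reported IOPS and throughput dramatically, which is exactly why a single “I/O speed” number is meaningless on its own.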
Another pitfall is how to provision (allocate) space inside the configured storage subsystem: is it better to use fat or thin provisioning? What are the performance implications?
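The core difference between the two approaches can be seen even at the plain-file level. Here is a minimal Python sketch (not taken from this article's benchmarks; the file names and sizes are illustrative) contrasting a thin, sparse-allocated image with a fully written, fat-provisioned one, by comparing the apparent file size with the blocks actually allocated on disk:

```python
import os
import tempfile

SIZE = 16 * 1024 * 1024  # 16 MiB "virtual disk" (illustrative size)

with tempfile.TemporaryDirectory() as workdir:
    thin_path = os.path.join(workdir, "thin.img")
    fat_path = os.path.join(workdir, "fat.img")

    # Thin provisioning: only the file length is set; blocks are
    # allocated lazily, on first write (a sparse file).
    with open(thin_path, "wb") as f:
        f.truncate(SIZE)

    # Fat provisioning: every byte is written up front, so all
    # blocks are allocated immediately.
    with open(fat_path, "wb") as f:
        f.write(b"\0" * SIZE)

    thin_stat = os.stat(thin_path)
    fat_stat = os.stat(fat_path)

    # st_blocks counts 512-byte units actually allocated on disk.
    thin_alloc = thin_stat.st_blocks * 512
    fat_alloc = fat_stat.st_blocks * 512

    print("thin: apparent", thin_stat.st_size, "allocated", thin_alloc)
    print("fat:  apparent", fat_stat.st_size, "allocated", fat_alloc)
```

On most Linux filesystems the thin file reports little or no allocated space until it is actually written to, while the fat file has all of its blocks reserved up front. The trade-off is that thin-provisioned storage must allocate blocks at write time, which costs some performance and can fail if the backing pool runs out of space.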
At the same time, flexibility and ease of use are easy things to sell: I often read about how modern copy-on-write (CoW) filesystems such as BTRFS and ZFS have plenty of features, and about how many people recommend them for performance-sensitive tasks such as virtual machine storage.
Sure, BTRFS and ZFS have a host of slick features, but are they really suited to storing “live” virtual machine data, or are our beloved legacy filesystems, such as EXT4 and XFS, better suited for this task? And what about going a layer down and working directly with logical volumes?
I'll try to answer these questions. Anyway, please keep in mind that the results of any benchmark are of relative value – I don't pretend to elect the uber-mega-super-duper I/O configuration. I only hope to help you select the right tool for the right job.