VMware vs VirtualBox vs KVM vs Xen: virtual machine performance comparison

Written by Gionatan Danti. Posted in Virtualization


I/O benchmarks: Windows 2008 installation time

A critical parameter for virtual machines is I/O performance: while a 10% loss in CPU speed can be a minor problem (as CPU performance is almost always greater than needed), a 10% loss in I/O performance is rarely a non-issue. So it is crucial that each hypervisor does its best to impose the smallest possible overhead on I/O operations.

To offer you a 360-degree view of the problem, I ran very different I/O benchmarks.

The first is Windows 2008 installation time: for this test, I measured the time needed for a full Windows 2008 installation. The timer was started at the initial file-copy operation (right after partition definition) and stopped at the end of the first installation phase (right before the system asks to be restarted):

Windows 2008 installation time

As you can see, VirtualBox is the clear winner: it took less than 6 minutes to complete the operation.

VMware and Xen are very closely matched, while KVM is the real loser: it took over 30 minutes to complete, a 5-fold increase compared to VirtualBox! What can the problem be here? Is it a problem related to the vdisk image format, or is it an indicator of serious I/O overhead?

To answer the above questions, I ran the same test under different conditions:

Windows 2008 installation time with QCOW2 and RAW

In the above graph, I show the results of the Windows 2008 install using three different disk image setups:

  • a normal, dynamic QCOW2 image

  • a normal QCOW2 image with 10 GB of preallocated space

  • a raw image

Using a raw image lets us bypass any possible problem related to cache type and the QCOW2 block driver (see the QEMU documentation for more info), while using a preallocated QCOW2 image lets us measure the cost of the dynamic growth feature.
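For reference, the three disk-image setups above can be reproduced with the standard `qemu-img` tool. The file names and the 10 GB size below are illustrative, not necessarily the exact ones used in this test; note that newer QEMU versions also accept `preallocation=full`, while older ones only support `preallocation=metadata`:

```shell
# 1) Normal, dynamically-growing QCOW2 image (space allocated on demand)
qemu-img create -f qcow2 win2008-dynamic.qcow2 10G

# 2) QCOW2 image with preallocated space
#    (metadata preallocation; newer qemu-img also offers preallocation=full)
qemu-img create -f qcow2 -o preallocation=metadata win2008-prealloc.qcow2 10G

# 3) Raw image: fully addressable by definition, no QCOW2 block-driver overhead
qemu-img create -f raw win2008-raw.img 10G

# Inspect the resulting images (format, virtual size, allocated size)
qemu-img info win2008-dynamic.qcow2
```

Comparing `qemu-img info` output for the dynamic and preallocated images shows the difference: the dynamic image starts near-empty on disk and grows as the guest writes, which is exactly the bookkeeping the preallocated and raw setups avoid.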

The numbers speak for themselves: while the install time remains quite high, using either a RAW image or a preallocated QCOW2 image brings an interesting boost. In other words, it seems that KVM has a very high I/O overhead, so stripping away some I/O operations by using a preallocated image or a RAW image (which is, by definition, preallocated) gives a noticeable speed increase.

This theory is also supported by direct observation of the time needed to load the Windows 2008 installer from the CD: KVM was the slowest, indicating slow I/O performance not only on disks but also on removable devices such as the CD-ROM.

So now we know why KVM was so slow, but why was VirtualBox so fast? Probably, VirtualBox configures its write cache as a write-back cache, which is faster than a write-through cache but also more prone, in certain circumstances, to data loss. This is a perfect example of how the different “roots” of the analyzed hypervisors emerge: VirtualBox was created as a desktop product, where speed is often more important than correctness. The other hypervisors use the opposite approach: they sacrifice speed on the altar of safety; however, they can be configured to behave like VirtualBox (using a write-back cache policy).
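On KVM/QEMU, the cache policy is selected per disk on the command line. This is a configuration sketch only (the image and ISO names are placeholders, and it obviously needs a real guest image to boot); it shows how a KVM guest can be switched to the VirtualBox-like write-back behavior discussed above:

```shell
# Select the host cache policy per disk with the -drive option:
#   cache=writethrough : safest, every guest write reaches stable storage
#   cache=writeback    : faster, the host page cache absorbs writes
#                        (VirtualBox-like behavior, riskier on power loss)
#   cache=none         : bypass the host page cache entirely (O_DIRECT)
qemu-system-x86_64 -enable-kvm -m 1024 \
    -drive file=win2008.qcow2,format=qcow2,cache=writeback \
    -cdrom win2008-install.iso
```

The trade-off is the same one described above: `cache=writeback` reports writes as completed once they hit the host page cache, so a host crash can lose data the guest believes is safely on disk.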

UPDATE: a recent article comparing KVM vs VirtualBox can be found here: http://www.ilsistemista.net/index.php/virtualization/12-kvm-vs-virtualbox-40-on-rhel-6.html


#1 Nathan 2012-09-12 03:12
This is a terrible review: installing the VMware paravirtual drivers but not the KVM Windows paravirtual drivers. All results from VMware must be discarded for comparison purposes.
#2 Marcelo 2015-11-15 03:16
A quick comparison I made between VMware Workstation Player and VirtualBox, with XP as guest, shows a ridiculous I/O advantage of VB, while VMware has a big advantage on 3D graphics.
#3 Gionatan Danti 2015-11-15 09:32
Quoting Marcelo:
A quick comparison I made between VMware Workstation Player and VirtualBox, with XP as guest, shows a ridiculous I/O advantage of VB, while VMware has a big advantage on 3D graphics.

Hi Marcelo,
VBox's higher I/O speed is probably an artifact of VBox not honoring write barriers (synchronized writes) by default. While this gives much higher speed, storage consistency is somewhat reduced and I do not suggest disabling write barriers on production hosts/machines.
