VMware vs VirtualBox vs KVM vs Xen: virtual machine performance comparison

Written by Gionatan Danti on . Posted in Virtualization


Testbed and Methods

The contenders in this test round are four free virtualization products:

  • VMware Server, version 2.0.2

  • Xen, version 3.0.3-94

  • KVM, version 0.83

  • VirtualBox, version 3.1.2

All these virtualizers were tested on a self-assembled PC running a single guest instance. Below you can find the hardware configuration and guest settings:





Host configuration:

  • Motherboard: AsRock P55 Pro (Intel P55 chipset + ICH10R)

  • CPU: Intel Core i7-860 (2.8 GHz core speed, 4 cores, 4 × 256 KB L2 cache and 8 MB L3 cache)

  • RAM: 8 GB DDR3 @ 1333 MHz

  • Disks: 4 × 1 TB WD GreenPower disks in RAID 10 configuration

  • Video: Nvidia 8400GS w/ 256 MB DDR2 VRAM

  • Host OS: 64-bit CentOS 5.4 with kernel version 2.6.18-164.9.el5

Guest configuration:

  • RAM: 2 GB

  • Disk: 125 GB max, dynamically allocated

  • Guest OS: 64-bit Windows Server 2008

As I wanted to evaluate the out-of-the-box experience and the auto-configuration capabilities of the various hypervisors, the virtual machines were created with default settings. The only exception to this rule was enabling the nested page table feature on VirtualBox: since the other virtualizers can auto-enable this feature and were run with nested page tables enabled, I felt it unfair not to enable it on VirtualBox as well.
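On VirtualBox, nested paging can be toggled from the command line with `VBoxManage`; a minimal sketch is below (the VM name "winserver2008" is a hypothetical placeholder, not necessarily the name used in this test):

```shell
# Enable hardware virtualization and nested paging for an existing,
# powered-off VM. "winserver2008" is a placeholder VM name.
VBoxManage modifyvm "winserver2008" --hwvirtex on --nestedpaging on

# Verify that the setting took effect
VBoxManage showvminfo "winserver2008" | grep -i "nested paging"
```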

VMware and VirtualBox offer the possibility to install in the guest OS some additional packages which should improve performance and/or integration with the host system. For this review, I installed the VMware guest additions but not the VirtualBox additions. Why? Simply put, while VMware's guest tools provide some very important performance-enhancing drivers (such as a paravirtualized network driver), the VirtualBox additions seem to focus only on providing better mouse/keyboard support and on enabling more resolutions for the virtual video adapter. This is in line with my previous tests, where I noted that VirtualBox's performance level was not affected at all by its guest additions (the video emulation was even a bit slower with guest tools installed).

A note about Xen: while the other hypervisors examined today run on top of a completely “standard” and unmodified host operating system, Xen uses another paradigm. At the lowest level, near the hardware, it has a small “bare-metal” hypervisor. On top of that first layer runs the hosting operating system (in Xen terminology, it runs in Dom0), which has the special privileges to talk to the hardware (using the hardware drivers) and to start other, unprivileged guest OSes (in the so-called DomUs).
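To make the Dom0/DomU split concrete: on Xen 3.x, Dom0 starts a guest from a small configuration file (which uses Python syntax). The sketch below shows what a fully-virtualized (HVM) Windows DomU definition typically looks like; the name, memory size, and paths are hypothetical placeholders, not the exact configuration used in this article.

```python
# Hypothetical /etc/xen/win2008 DomU definition -- a minimal sketch of how
# Dom0 describes an HVM (fully-virtualized) Windows guest. All names and
# paths are placeholders.
kernel  = '/usr/lib/xen/boot/hvmloader'   # HVM firmware loader
builder = 'hvm'                           # full virtualization, needed for Windows
name    = 'win2008'
memory  = 2048                            # MB, matching the 2 GB guest RAM above
vcpus   = 1
disk    = ['file:/var/lib/xen/images/win2008.img,hda,w']
vif     = ['type=ioemu, bridge=xenbr0']   # emulated NIC on the default bridge
boot    = 'c'                             # boot from the virtual hard disk
```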

All the benchmark data collected are relative to this single guest OS instance. While running a single guest OS instance is not very realistic, I think the collected data can be quite indicative of the virtualizers' relative performance and overhead.

The collected data cover two types of workload:

  • a synthetic one, made of very specific tests (e.g. CPU tests, memory tests, etc.)

  • a real-world one, made by recording the performance exhibited by some real server applications (most notably, Apache 2.2.14 and MySQL 5.1.41)

The synthetic tests should give us some insight into what kinds of operations slow down the virtual machine and, therefore, where a virtualizer is better (or worse) than the others. However, keep in mind that synthetic tests can describe only half of the truth: to have a more complete understanding of the situation, we need to examine “real-world” results also. In this article, we will evaluate the performance of two very important services: Apache and MySQL. All the tests below (with the exception of IOmeter) were run three times and the results were averaged.
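The run-three-times-and-average step can be sketched as a small shell harness; this is an illustrative reconstruction (the benchmark command is a placeholder), not the exact script used for these tests:

```shell
#!/bin/bash
# Hypothetical harness: run a benchmark command three times and print the
# average wall-clock time, mirroring the methodology described above.
CMD=${1:-"sleep 0.1"}     # benchmark command; "sleep 0.1" is a stand-in
RUNS=3
total=0
for i in $(seq $RUNS); do
    start=$(date +%s%N)                 # nanoseconds since the epoch
    eval "$CMD" >/dev/null 2>&1
    end=$(date +%s%N)
    total=$(( total + (end - start) ))  # accumulate elapsed nanoseconds
done
avg_ms=$(( total / RUNS / 1000000 ))    # average, converted to milliseconds
echo "average over $RUNS runs: ${avg_ms} ms"
```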

Also, please keep in mind that, while proper hardware benchmarking is itself a very difficult task, benchmarking a guest operating system, which runs on top of another, hosting operating system, is even more difficult. Trying to isolate the pure guest performance can be very tricky, especially regarding I/O performance. The reason is clear: the benchmark method must account not only for guest-side I/O caching, but also for host-side I/O caching. To alleviate the host-side cache effects (which can really alter the collected data), I ran a host-side script that synchronized the host-side write cache and then dropped both write and read caches before the execution of any I/O-sensitive benchmark. While you can argue that caching is a very important source of performance and that a virtual machine can use it very efficiently (and you would be right), this article aims to isolate the hypervisors' performance on some very specific tasks and, to present reproducible results, I had to go the route of dropping the host-side cache between test runs.
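On Linux, that sync-and-drop step boils down to two operations: flushing dirty pages with `sync`, then writing to `/proc/sys/vm/drop_caches`. A minimal sketch of such a script (not the author's exact one) follows; dropping caches requires root, so the sketch checks for that first:

```shell
#!/bin/bash
# Sketch of the host-side cache flush run before each I/O-sensitive
# benchmark. Only root can write to /proc/sys/vm/drop_caches.
sync   # flush the host-side write cache (dirty pages) to disk

if [ "$(id -u)" -eq 0 ] && [ -w /proc/sys/vm/drop_caches ]; then
    # 3 = drop page cache + dentries and inodes (available on Linux 2.6.16+,
    # so it works on the CentOS 5.4 host used here)
    echo 3 > /proc/sys/vm/drop_caches
    status="caches dropped"
else
    status="skipped (need root)"
fi
echo "$status"
```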

Let me repeat it one more time: this article does not aim to elect the best-overall virtual machine software. A different benchmark methodology can give different results and, in the future, I will run many other tests in many other environments. For example, running many different guest OSes at once will be the subject of a following article.

So... it's time for some numbers. Let's see some synthetic benchmark results first.

UPDATE: a recent article comparing KVM vs VirtualBox can be found here: http://www.ilsistemista.net/index.php/virtualization/12-kvm-vs-virtualbox-40-on-rhel-6.html


#1 Nathan 2012-09-12 03:12
This is a terrible review, to install the VMware paravirtual drivers but not the KVM Windows paravirtual drivers. All results from VMware must be discarded for comparison purposes.
#2 Marcelo 2015-11-15 03:16
A quick comparison I made between VMware Workstation Player and VirtualBox, with XP as guest, shows a ridiculous I/O advantage of VB, while VMware has a big advantage on 3D graphics.
#3 Gionatan Danti 2015-11-15 09:32
Quoting Marcelo:
A quick comparison I made between VMware Workstation Player and VirtualBox, with XP as guest, shows a ridiculous I/O advantage of VB, while VMware has a big advantage on 3D graphics.

Hi Marcelo,
VBox's higher I/O speed is probably an artifact of VBox not honoring write barriers (synchronized writes) by default. While this gives much higher speed, storage consistency is somewhat reduced, and I do not suggest disabling write barriers on production hosts/machines.
