In the last ten years, full-virtualization technologies have gained considerable traction. While this has sometimes led to an excessive proliferation of virtual machines, the key concept is very appealing: as CPU performance and memory capacity relentlessly grow over time, why not use this ever-increasing power to consolidate multiple operating system instances on a single, powerful server?
If done correctly (i.e., without an unnecessary growth in the total number of OS instances), this consolidation process brings considerably lower operating costs, from both the electricity and the maintenance/administration standpoints.
However, in order to extract good performance from virtual machines, it is imperative to correctly size the virtualization host: the CPU, disk, memory and network subsystems should all be capable of sustaining the expected average workload, with some headroom for the inevitable usage peaks.
Usually, the most stressed component in a virtualized environment is the I/O subsystem, especially considering the very slow random read/write speed of mechanical disks. As covered in previous articles, KVM gives you the choice to enable host OS caching on the image file or LVM volume backing a VM's virtual disk.
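For reference, the cache mode is selected per disk through the cache attribute of the driver element in the libvirt domain XML. A minimal sketch, assuming a raw, file-backed virtio disk (the image path and device names are purely illustrative):

```xml
<disk type='file' device='disk'>
  <!-- cache='writeback' enables the host page cache for this disk;
       other valid values include 'none', 'writethrough' and 'directsync' -->
  <driver name='qemu' type='raw' cache='writeback'/>
  <!-- illustrative image path -->
  <source file='/var/lib/libvirt/images/vm1.img'/>
  <target dev='vda' bus='virtio'/>
</disk>
```

The same setting is exposed on a plain QEMU command line through the cache= parameter of the -drive option.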
As recent QEMU versions honor write-barrier requests and pass them down to the host storage stack, using a write-back strategy is a real option. It is not a silver bullet, of course: there are cases where a write-back cache can be a problem, for example in scenarios involving live migration (currently, libvirt/QEMU advise against using a write-back cache during live migration, or data corruption may occur). In many cases, however, it is an appropriate choice.
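In those migration scenarios, the usual fallback is cache='none', which opens the backing file or volume with O_DIRECT and bypasses the host page cache entirely; on the same illustrative disk definition shown above, only the driver line changes:

```xml
<driver name='qemu' type='raw' cache='none'/>
```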
So, let's go straight to the point: how do KVM cache settings affect VM performance, host resource usage and consolidation ratio?