KVM storage performance and cache settings on Red Hat Enterprise Linux 6.2

Written by Gionatan Danti. Posted in Virtualization


KVM cache modes overview

Normally, a virtual guest system uses a host-side file to store its data: this file represents a virtual disk, which the guest uses as a normal, physical disk. However, from the host's point of view this virtual disk is an ordinary data file, and it may be subject to caching.

In this context, caching is the process of keeping some disk-related data in physical RAM. When we use the cache to store in RAM only previously read data, we speak of a read cache, or write-through cache. When we also store in RAM data that will later be flushed to disk, we speak of a read/write cache, or write-back cache. A write-back cache, by holding write requests in fast RAM, has higher performance; however, it is also more prone to data loss than a write-through cache, as the latter only caches read requests and immediately writes any data to disk.

As disk I/O is a very important performance parameter, Linux and Windows operating systems generally use a write-back policy with periodic flushes to the physical disk. However, when using a hypervisor to virtualize a guest system, you can effectively cache things twice (once in host memory and again in the virtual guest's memory), so it often makes sense to disable host-based caching on the virtual disk file and let the guest system manage its own caching. Moreover, a host-side write-back policy on the virtual disk file used to significantly increase the risk of data loss in case of a guest crash. However, as you will soon see, thanks to a new "barrier passing" feature, this may no longer be the case.
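As a concrete illustration (a minimal sketch with example file paths and device names, not taken from the test setup described later), the cache policy is selected per virtual disk: on a plain qemu-kvm command line it is passed through the -drive option, while under libvirt it is the cache attribute of the <driver> element in the domain XML.

    # qemu-kvm command line: bypass the host page cache for this disk (cache=none)
    qemu-kvm -drive file=/var/lib/libvirt/images/guest.img,if=virtio,cache=none ...

    <!-- libvirt domain XML: the same setting expressed as a driver attribute -->
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source file='/var/lib/libvirt/images/guest.img'/>
      <target dev='vda' bus='virtio'/>
    </disk>

Replacing cache='none' with cache='writeback' or cache='writethrough' selects the host-side write-back or write-through policies described above.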

 
