KVM storage performance and cache settings on Red Hat Enterprise Linux 6.2

Written by Gionatan Danti. Posted in Virtualization


Testing the barrier-passing feature

Write barrier passing is not the newest feature: it has existed for some time now. However, it was not always enabled, as it initially only worked on VirtIO-based virtual devices and only in specific Qemu/KVM builds. So the first thing to do is to test its effectiveness with the RHEL 6.2-provided Qemu packages and with standard, KVM wizard-created virtual machines.
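As a reference, the cache mode under test can be selected per-disk in the libvirt domain XML (for example via “virsh edit”), using the cache attribute of the disk driver element. A minimal sketch, assuming a raw image file at a hypothetical path:

    <disk type='file' device='disk'>
      <!-- cache can be 'none', 'writeback' or 'writethrough' -->
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/var/lib/libvirt/images/guest.img'/>
      <target dev='vda' bus='virtio'/>
    </disk>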

But how can we test this feature? One smart, simple yet reliable test is to use, inside the guest OS, the Linux “dd” utility with appropriate command line flags: direct and sync. The first flag enables direct access to the virtual storage device, bypassing the guest pagecache but not issuing any write barrier command on the guest side. Moreover, a direct guest write hits some of the host-based caches (which specific cache depends on the KVM cache setting – see the picture with the red, gray and blue arrows published before). This barrier-less direct write enables us to simulate a “no-barrier-passing” situation.
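For example, the barrier-less pass inside the guest can be something along these lines (target file and transfer size are purely illustrative):

    # direct I/O: bypass the guest pagecache, but issue no write barriers
    dd if=/dev/zero of=/root/testfile bs=1M count=512 oflag=direct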

The second flag enables us to run another dd write pass with both direct and sync flags set, meaning that the guest will now generate a write barrier for each write operation (remember that the sync flag should guarantee a 100% permanent write, so a write barrier is needed and it will be issued if the guest OS supports it).
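The synched pass simply adds the sync flag to the same command (again, file name and size are illustrative):

    # direct + sync I/O: every write must reach stable storage,
    # so the guest issues a flush/barrier request for each operation
    dd if=/dev/zero of=/root/testfile bs=1M count=512 oflag=direct,sync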

OK, it's time to give you some numbers now. First, let's see how a VirtIO-based virtual machine performs in these “dd” tests:

Barrier passing testing with VirtIO driver

We can start our analysis from the host side: as you can see, the sync-less writes are much faster than the synched ones. The reason is simple: while the former can use the physical disk cache to temporarily store data, the latter forces all data to physically reach the disk platters (for permanent storage). It is interesting to note that, albeit with different absolute values, the “nocache” and “writeback” guest results follow the same pattern: this proves the effectiveness of the barrier-passing feature. If this feature did not work, we would see synched speeds on par with non-synched ones, but that is not what happens. But why is the direct writeback score so high? It is probably an artifact of the VirtIO driver; perhaps the driver is collapsing multiple writes into one single data transfer.

See the very low “writethrough” scores? They are due to the “always-sync” policy of this cache mode. You can see that, while very safe, this cache mode is way slower than the others.

What happens if we have to use the older, but universally compatible, IDE driver?
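For completeness, switching a guest disk from VirtIO to the emulated IDE controller is again a matter of the domain XML – a rough sketch, with hypothetical device names and paths:

    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/var/lib/libvirt/images/guest.img'/>
      <!-- IDE emulation instead of VirtIO -->
      <target dev='hda' bus='ide'/>
    </disk>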

Barrier passing testing with IDE driver

While the absolute values are quite different from the previous ones, the IDE block driver gives us the same end result: the write barrier-passing feature is up and working. And again we see the very low writethrough result.

Being sure that the barrier-passing feature works is no small feat: we can now comfortably use the fastest-performing cache mode, the writeback one. Please note that the above results were almost identical for both EXT3 (with barriers enabled) and EXT4 filesystems.
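If EXT3 does not enable barriers by default on your distribution, they can be turned on with the barrier=1 mount option – an illustrative fstab line inside the guest (device name is hypothetical):

    # enable write barriers on an EXT3 root filesystem
    /dev/vda1   /   ext3   defaults,barrier=1   1 1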
