Synthetic benchmarks: IOMeter tests

Intel's IOMeter is a very good disk benchmark. It is configurable both in access pattern (sequential or random, read or write) and in command queue depth (the number of outstanding I/O requests to be served concurrently). I benchmarked with queue depths varying from 1 to 32, but since the numbers were very similar, and since a deep I/O queue is the norm in the server space, I am providing only the QD=32 charts.
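
To get a feel for what "queue depth" means in practice, here is a minimal Python sketch (not IOMeter itself) that keeps a fixed number of random 4 KB reads in flight against a block device. The device path, request count and sizes are illustrative assumptions, not the actual test configuration.

    import os
    import random
    from concurrent.futures import ThreadPoolExecutor

    DEVICE = "/dev/sdb"       # assumption: the block device under test
    BLOCK_SIZE = 4 * 1024     # 4 KB requests, as in the small-block test
    QUEUE_DEPTH = 32          # outstanding requests kept in flight
    REQUESTS = 10_000         # arbitrary request count for the sketch

    def random_read(fd, device_size):
        # Pick a block-aligned random offset and read one block.
        offset = random.randrange(device_size // BLOCK_SIZE) * BLOCK_SIZE
        return len(os.pread(fd, BLOCK_SIZE, offset))

    fd = os.open(DEVICE, os.O_RDONLY)
    device_size = os.lseek(fd, 0, os.SEEK_END)

    # The thread pool keeps QUEUE_DEPTH requests outstanding at any given time.
    with ThreadPoolExecutor(max_workers=QUEUE_DEPTH) as pool:
        done = sum(pool.map(lambda _: random_read(fd, device_size), range(REQUESTS)))

    os.close(fd)
    print(f"completed {REQUESTS} random reads ({done // 1024} KiB total)")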

Let's see how the various storage subsystems cope with this benchmark. First, the transfer rate numbers:

Intel IOMeter - transfer speed

It seems that on small accesses (4 KB and 256 KB) the various arrays perform the same, right? Well, no: the chart's scale is dominated by the sequential read/write speeds. Speaking of the sequential tests, note that the absolute values are well below those registered by dd. However, the relative standings are the same, with a more pronounced RAID 5 write penalty.
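
This scale effect is easy to see once you remember that transfer rate is simply IOPS multiplied by the request size: at 4 KB, even a large IOPS gap translates into a handful of MB/s, which is invisible next to hundreds of sequential MB/s. A quick sketch with made-up numbers (not values from the charts):

    # Transfer rate and IOPS are two views of the same data: MB/s = IOPS x request size.
    def iops_to_mb_per_s(iops, block_size_bytes):
        return iops * block_size_bytes / 1_000_000

    # Illustrative figures only, not measured values:
    print(iops_to_mb_per_s(200, 4 * 1024))     # 200 IOPS at 4 KB   -> ~0.8 MB/s
    print(iops_to_mb_per_s(200, 256 * 1024))   # 200 IOPS at 256 KB -> ~52 MB/s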

Sequential speeds are only part of the story, and often the smaller part. Random performance is frequently far more important. So let's examine the same data from a different standpoint: I/O operations per second.

Intel IOMeter - I/O operations per seconds

Here we see a different picture: while read speeds are comparable across the arrays, the 4 KB write results differ dramatically! This is where we see how badly RAID 5 performs when small writes trigger its read-modify-write behavior: this array configuration achieves only ~60% of the single-disk result. For comparison, the RAID 10 array (which itself is hampered by the RAID 1 overhead) is 4X faster. RAID 0 results are off the chart, but remember: a single failed disk will cost you all your data...
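
To make the read-modify-write penalty concrete, here is a rough back-of-the-envelope sketch of the back-end I/Os generated by one small front-end write. This is a simplified model of my own (it ignores write-back caching and full-stripe writes), not measured data:

    # Approximate back-end I/Os per small (sub-stripe) front-end write.
    backend_ios_per_small_write = {
        "single disk": 1,   # write the block
        "RAID 0":      1,   # write the block on one member
        "RAID 10":     2,   # write the block on both halves of the mirror
        "RAID 5":      4,   # read old data, read old parity, write new data, write new parity
    }

    for layout, cost in backend_ios_per_small_write.items():
        print(f"{layout:12s} -> {cost} back-end I/Os per 4 KB write")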

In the mixed 256 KB random test (50% write / 50% read) we see a more balanced picture, but RAID 5 is again the slowest.

Another way to look at these data is average access time (the inverse of the previous graph):

Intel IOMeter - access time

The benchmarked server was built with power efficiency in mind, and the installed disks are from the WD Green series. These disks are not stellar performers, and the access time data confirm it: every setup needs at least 20 ms to read 4 KB from the disks. In the write tests, the deferred-write capability of these disks (each equipped with a well-sized 32 MB cache) shows up, with write access times far lower than the read access times. The only exception to this rule, you guessed it, is the RAID 5 setup.
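
Following the framing above (the access-time chart as the reciprocal of the IOPS chart), the conversion is straightforward. The 50 IOPS figure below is purely illustrative, not a value read off the charts:

    # Access time as the reciprocal of the IOPS figures (the framing used above).
    def access_time_ms(iops):
        return 1000.0 / iops

    print(f"{access_time_ms(50):.1f} ms")   # e.g. 50 IOPS -> 20.0 ms average access time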

One note: in an ideal world, the 4 KB read test should show a great advantage for the RAID 0 setup. Why? Because when the requested data chunk is smaller than the stripe size, the disks forming the array should be able to seek concurrently and independently, each disk effectively serving its own 4 KB request. With four disks, we should see a roughly four times lower average access time, but this is not the case: probably the Linux implementation is not able to exploit this. In the write test, the results are nearly 4X better than a single disk, courtesy of the 4X larger aggregate disk cache and the effective deferred-write algorithm of the WD disks.
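
The ideal scaling can be expressed in a couple of lines; the single-disk IOPS figure is an assumption for illustration, not a measurement:

    # Ideal small-random-read scaling for a striped array: when each request fits
    # inside one chunk, every member can seek independently, so aggregate IOPS
    # should grow (and average access time shrink) with the member count.
    def ideal_iops(single_disk_iops, members):
        return single_disk_iops * members

    single = 50                    # assumed 4 KB random read IOPS of one WD Green
    print(ideal_iops(single, 4))   # ideal 4-disk RAID 0 -> 200 IOPS, i.e. 4X lower access time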

Finally, let's look at the CPU load:

Intel IOMeter - CPU load

Please note that IOMeter had problems recording CPU load on my system, so I resorted to another method to measure it (the stats collected in /proc/stat); these numbers represent the average CPU load over the entire IOMeter run. Here we see very low CPU usage, with RAID 5, 0 and 10 using more CPU than the other configurations. Why are these numbers so low compared to the dd data? Simply because the IOMeter tests include a lot of random accesses, during which the CPU sits idle, waiting for the disks' head/platter combination to retrieve (or write) the data.
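
For reference, here is a minimal sketch of the /proc/stat approach: sample the aggregate "cpu" line before and after the run and compute the average non-idle share over the interval. This is not the exact script used for the chart, and treating iowait as idle time is an assumption of the sketch:

    # Field order in the "cpu" line: user nice system idle iowait irq softirq steal ...
    def read_cpu_times():
        with open("/proc/stat") as f:
            return [int(x) for x in f.readline().split()[1:]]

    def avg_cpu_load_percent(before, after):
        deltas = [a - b for b, a in zip(before, after)]
        total = sum(deltas)
        not_busy = deltas[3] + deltas[4]   # idle + iowait (assumption: both count as idle)
        return 100.0 * (total - not_busy) / total

    start = read_cpu_times()
    # ... run the IOMeter workload here ...
    end = read_cpu_times()
    print(f"average CPU load: {avg_cpu_load_percent(start, end):.1f}%")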