Synthetic benchmarks: Linux dd utility
The next test measures the maximum sequential read/write speed of each array using the Linux dd utility. The command used to run the benchmark was: dd if=/dev/zero of=/opt/zeroes bs=2097152 count=16000. This command creates a file of roughly 31 GiB (16000 blocks of 2 MiB each), writing in 2 MiB chunks. For the read test, simply invert the if and of parameters.
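For reference, here is a scaled-down sketch of the two benchmark runs. The file size is reduced to 64 MiB and the target path moved to /tmp so the snippet can run anywhere; at this size the result mostly measures the page cache, so for a real benchmark keep the original count=16000 (and consider oflag=direct to bypass caching).

```shell
#!/bin/sh
# Scaled-down sketch of the sequential write/read benchmark.
# NOTE: /tmp/dd_bench_zeroes, bs and count here are illustrative, not the
# article's actual test parameters (those use /opt/zeroes and count=16000).
TESTFILE=/tmp/dd_bench_zeroes

# Sequential write test: stream zeroes into the target file in 2 MiB chunks.
# dd prints its throughput summary on stderr; keep only the last line.
dd if=/dev/zero of="$TESTFILE" bs=2097152 count=32 2>&1 | tail -n 1

# Sequential read test: the inverted command, streaming the file back out
# (/dev/zero discards writes on Linux, so it works as a sink here).
dd if="$TESTFILE" of=/dev/zero bs=2097152 2>&1 | tail -n 1

# Clean up the test file.
rm -f "$TESTFILE"
```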
As you can see, the results are quite in line with expectations: in read operations, RAID 5 is comparable to RAID 0, while providing much better resilience against data loss. Speaking of reads, the RAID 1 result is noteworthy, as it shows that the Linux software RAID 1 implementation cannot stripe a single sequential read across multiple disks.
Write speed gives us a somewhat different picture: RAID 5 is no longer comparable to RAID 0, sitting instead at RAID 10 level. Considering that, in this setup, RAID 5 has three disks actively striping data at any moment, and that with 2 MiB chunks we should not hit the read-modify-write behavior, here we can see the impact of the extra management operations required by this more complex RAID mode (in the form of more software-side work and probably some additional disk seeks).
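To make the read-modify-write remark concrete, here is the stripe arithmetic for the setup assumed above (a 4-disk RAID 5 with 2 MiB chunks; the disk count is inferred from the RAID 1 description elsewhere in the article):

```shell
#!/bin/sh
# Stripe math for a hypothetical 4-disk RAID 5 with 2 MiB chunks.
disks=4
chunk_mib=2
data_disks=$((disks - 1))                    # one chunk per stripe holds parity
full_stripe_mib=$((data_disks * chunk_mib))  # data carried by one full stripe
echo "full stripe: ${full_stripe_mib} MiB of data per ${disks}-disk stripe"
# A sequential write issued in full-stripe units lets md compute parity from
# the new data alone; smaller writes force it to read old data and old parity
# back first (the read-modify-write penalty).
```

Since dd is writing large sequential runs, each 6 MiB of data fills a whole stripe and parity can be computed without extra reads, which is why the remaining RAID 5 overhead must come from management work rather than read-modify-write cycles.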
What about CPU load?
The first thing to note is that, with a speedy quad-core processor, CPU load is quite low for each storage subsystem setup (please note that this is the averaged CPU load across all 8 logical cores).
As you can see, the most CPU-hungry array is the RAID 5 one, while the single disk configuration causes the least CPU load; however, you should consider that the former transfers data at a much greater speed than the latter.
So, what about CPU efficiency, measured as % CPU load for each transferred MB?
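One plausible way to derive this metric (the article does not spell out its exact formula) is to divide the averaged CPU load by the measured throughput, giving the CPU cost per MB/s transferred. The numbers below are made up purely for illustration:

```shell
#!/bin/sh
# CPU efficiency = average CPU load (%) / throughput (MB/s),
# i.e. the CPU cost per transferred MB. Values are hypothetical examples,
# not figures from the article's charts.
cpu_load=12.5      # example: averaged % CPU load across all logical cores
throughput=250     # example: MB/s reported by dd
eff=$(awk -v c="$cpu_load" -v t="$throughput" 'BEGIN { printf "%.4f", c / t }')
echo "CPU efficiency: ${eff} % CPU per MB/s"   # -> 0.0500 for these inputs
```

A lower value means the array moves more data for each percentage point of CPU it consumes, which is the sense in which the chart below ranks the setups.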
This chart shows us the CPU cost of a single transferred MB of data. From it we can see that, while the single disk setup is very efficient in read operations, it performs badly in the write test. The RAID 5 array also performs poorly, as it has the highest read cost and a very high write cost. In fact, excluding the single disk setup, only RAID 1 (which must replicate every write to all four disks) has a higher write cost. The winner here is probably the RAID 10 setup, which combines high reliability and good performance with low CPU overhead.