RAID 10 layouts
RAID10 requires a minimum of 4 disks (in theory, Linux mdadm can create a custom RAID10 array using only two disks, but this setup is generally avoided). Depending on which disks fail, and assuming the default two copies of each chunk, it can tolerate from a single disk failure (worst case: the second failed disk holds the same data as the first) up to N / 2 failed disks (best case: no two failed disks hold the same data).
RAID10 is the combination of RAID 0 and RAID 1. It can be implemented either as nested levels or as a native level. In the first case the operating system effectively uses two software drivers to manage the RAID array: the higher-level RAID 0 driver stripes your requests on top of two virtual disks that are really, in turn, RAID 1 arrays composed of two or more physical disks managed by a lower-level RAID 1 driver. In the latter case (a “native level”), the operating system uses a single RAID driver capable of understanding this complex RAID level and of directly managing the disks, without relying on other RAID implementations.
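To make the distinction concrete, here is a minimal sketch of both approaches using mdadm (the device names /dev/sdb through /dev/sde and the md numbers are just examples):

    # Nested RAID 1+0: two RAID1 mirrors, tied together by a RAID0 driver on top
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
    mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdd /dev/sde
    mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/md1 /dev/md2

    # Native RAID10: a single driver directly manages all four physical disks
    mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde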
Linux software RAID has native RAID10 capability, and it exposes three possible layouts for a RAID10-style array: near (the default), far and offset. These layouts have different performance characteristics, so it is important to choose the right one for your workload.
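The layout is selected at array creation time through the --layout option, where the digit after the layout letter is the number of data copies. For example (again with example device names, and the 512 KB chunk size used throughout this article):

    # near layout, 2 copies (the default)
    mdadm --create /dev/md0 --level=10 --layout=n2 --chunk=512 --raid-devices=4 /dev/sd[b-e]
    # far layout, 2 copies
    mdadm --create /dev/md0 --level=10 --layout=f2 --chunk=512 --raid-devices=4 /dev/sd[b-e]
    # offset layout, 2 copies
    mdadm --create /dev/md0 --level=10 --layout=o2 --chunk=512 --raid-devices=4 /dev/sd[b-e]

    # the active layout of a running array can be verified with:
    mdadm --detail /dev/md0 | grep Layout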
But how do they differ? I prepared three diagrams showing how the data layout is affected by the “near”, “far” and “offset” options. These diagrams are somewhat simplified; for a full, detailed explanation you should read the md(4) manpage.
The first diagram depicts the RAID10 NEAR layout:
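In simplified text form (each column is a disk, each row a chunk-sized slot, and equal letters mark the two copies of the same chunk):

    Disk1  Disk2  Disk3  Disk4
      A      A      B      B
      C      C      D      D
      E      E      F      F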
As you can see, the default near layout is very similar to a nested RAID 1+0 setup. For example, assuming a 2 MB write and a 512 KB chunk size, the host write is first broken into two 1 MB stripes and, in turn, into four 512 KB chunks. Finally, each chunk is replicated between consecutive devices.
Compared to a single disk, a four-disk RAID10 near setup should have the following maximum (best-case) performance profile:
- 2x sequential read speed (sequential reads can be striped only over the disks holding different data)
- 2x sequential write speed (while writes engage all four disks, remember that each chunk must be written twice, RAID1-style)
- 4x random read speed (random reads are bound to the number of active spindles, four in this example)
- 2x random write speed (again, writes need to be replicated).
Now it is the turn of the FAR layout:
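In the same simplified text form (the first copy of the data is striped, RAID0-style, over the first half of every disk; the second copy occupies the second half, shifted by one disk):

    Disk1  Disk2  Disk3  Disk4
      A      B      C      D
      E      F      G      H
     ...    ...    ...    ...
      D      A      B      C
      H      E      F      G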
As you can see, things are considerably different: the disks are effectively traversed by two RAID0 stripe sets, and each half of a disk stores a different set of data, with the second set replicating the first. Please note that the mirrored chunk continues to be the “base unit” of the array, meaning that a single failed disk will not bring down the entire array.
Maximum performance compared to a single disk should be:
- 4x sequential read
- 2x sequential write
- 4x random read
- 2x random write
So, it seems that the far layout is always better than, or at least on par with, near, right? Well, no. The far layout has a weak point: as the two data copies are placed far away from each other, in random write and mixed random read/write workloads the disks will spend much more time seeking. As seek time is the dominant factor in random workloads, and random workloads are generally the dominant usage pattern, chances are that a far layout will perform considerably worse than a near layout.
Finally, we have the OFFSET layout:
As the name implies, it is somewhat similar to the “far” layout, but with the difference that the data copies are placed quite near each other. For example, A's copy is placed on the next disk at a one-chunk offset from A's original location.
Maximum performance figures compared to a single disk are equal to those of the far layout:
- 4x sequential read
- 2x sequential write
- 4x random read
- 2x random write
But we have a difference: as the data copies are near the original location, disk seek time is greatly reduced compared to the standard far layout. This means that the single critical problem of the far layout should be solved, without impacting too much its very good sequential speed (which should be only a little lower).
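For readers who want to run similar comparisons at home, here is a minimal sketch using fio against the raw array device (the device name, block sizes and runtimes are just examples, not the exact setup used for the benchmarks below):

    # sequential reads, large blocks (non-destructive)
    fio --name=seqread --filename=/dev/md0 --readonly --direct=1 \
        --ioengine=libaio --iodepth=16 --rw=read --bs=1M --runtime=60 --time_based

    # random reads, small blocks (non-destructive)
    fio --name=randread --filename=/dev/md0 --readonly --direct=1 \
        --ioengine=libaio --iodepth=16 --rw=randread --bs=4k --runtime=60 --time_based

    # WARNING: write tests against the raw device destroy its contents;
    # point fio at a file inside a mounted filesystem instead
    fio --name=randwrite --directory=/mnt/array --size=4G --direct=1 \
        --ioengine=libaio --iodepth=16 --rw=randwrite --bs=4k --runtime=60 --time_based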
UPDATE 22/09/2013: as noted by reader Alberto Lauretti (thank you!) in comment #7, the “offset” layout has slightly lower reliability than the near or far layouts. The point is that any failure involving two consecutive disks (e.g. the first and second disks, or the second and third disks, etc.) will lead to data loss. This is a direct consequence of having your data “scrambled” by the offset layout. Anyway, this should be of minor concern: any RAID10 array with a failed disk should be immediately repaired by replacing the failed disk, as even the near and far layouts are exposed to data loss if a second disk failure happens (albeit with lower probability than the offset layout).
Will benchmark results confirm or deny these considerations? Let's see...