RAID levels: a recap
Over time, many RAID levels have been used. Note that this page is not intended to be a top-down discussion of the various RAID setups; for more details, please read the Wikipedia page linked above. Among the Linux-supported RAID levels, the most common are:
RAID 0
Pros: Highest performance; highest array capacity
Cons: Worst reliability
Description: it requires at least 2 disks (or partitions).
The data to be read from or written to the disks are split into two or more pieces (called stripes) that are concurrently sent to or received from the disks. This approach means that, using N disks, the total array capacity will be N * single disk capacity and that, in optimal conditions, the storage subsystem will also be N times faster than a single disk. Speaking about reliability, this RAID level is at a great disadvantage: if a single disk fails, you lose all your data. For this reason, RAID 0 is almost never used in servers.
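To put some numbers on this, here is a minimal Python sketch of the arithmetic above (the disk count, capacity, speed and per-disk survival probability are made-up illustrative values, not measurements):

    # Back-of-the-envelope RAID 0 figures for a hypothetical 4 x 2 TB array.
    n_disks = 4
    disk_capacity_tb = 2.0
    disk_speed_mbs = 150.0                                # single-disk sequential throughput

    array_capacity_tb = n_disks * disk_capacity_tb        # 8.0 TB: N * single disk capacity
    ideal_speed_mbs = n_disks * disk_speed_mbs            # 600 MB/s: best case, N * single disk speed

    # Reliability: the array is lost as soon as ANY member disk fails, so with a
    # per-disk survival probability s the whole array survives with probability s ** N.
    s = 0.97                                              # illustrative per-disk survival probability
    array_survival = s ** n_disks                         # ~0.885

    print(array_capacity_tb, ideal_speed_mbs, round(array_survival, 3))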
RAID 1
Pros: Highest reliability
Cons: Single-disk-like speed; worst array capacity
Description: it requires at least 2 disks.
It works by using the second (and subsequent) disks to create a mirror image of the first one. This means that writes are sent to all disks in the array. This level effectively gives you the capacity of only a single disk, but it has a high resilience to data loss: in an N-disk array, you can lose all but one disk before losing your data. Speaking about performance, we have a mixed bag: while reads can theoretically be faster than in a single-disk setup (you can read from multiple drives concurrently), write performance will be, at most, at the level of a single disk. Many RAID 1 implementations, however, do not exploit this theoretical advantage in read speed. Let's see if Linux software RAID can do this...
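The same back-of-the-envelope arithmetic for a mirror looks like this (again, a sketch with made-up numbers):

    # Back-of-the-envelope RAID 1 figures for a hypothetical 2 x 2 TB mirror.
    n_disks = 2
    disk_capacity_tb = 2.0
    disk_speed_mbs = 150.0

    array_capacity_tb = disk_capacity_tb              # only one disk's worth of space is usable
    max_read_speed_mbs = n_disks * disk_speed_mbs     # theoretical: reads can be spread over all mirrors
    max_write_speed_mbs = disk_speed_mbs              # every write must reach every disk

    tolerated_failures = n_disks - 1                  # data survives until the last copy is gone
    print(array_capacity_tb, max_read_speed_mbs, max_write_speed_mbs, tolerated_failures)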
RAID 5
Pros: In optimal conditions, it is speed-comparable to a RAID 0 setup; good resilience against data loss; excellent array capacity
Cons: In certain circumstances, write speed can be very low
Description: it requires at least 3 disks.
It works by striping your data over multiple disks and concurrently storing stripe parity on one of the available disks. For example, in a 3-disk setup, when you write some data the software array controller must:
- stripe your data across the first two available disks
- store parity information on the third disk
Many (if not all) RAID 5 implementations rotate the parity data across the available disks: for example, if a write uses disks 0 and 1 to stripe data and disk 2 to store parity, the next write will use disks 1 and 2 to stripe data and disk 0 to store parity; a third write will use disks 0 and 2 to stripe data and disk 1 to store parity.
In this manner, using N disks, RAID 5 gives you a total array capacity of (N-1) * single disk capacity and a maximum theoretical performance equal to (N-1) * single disk speed. This array setup can tolerate the failure of one disk without losing any data; however, the array will then run in degraded mode, with reduced performance. Also, if you do not replace the failed disk, another disk failure will be critical and will result in data loss.
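The parity RAID 5 relies on is a simple XOR across the data chunks of a stripe, which is exactly what allows a single failed disk to be rebuilt. Below is a toy Python sketch of the idea for a 3-disk array (the chunk contents are made up, and the rotation formula shown is just one possible layout, not necessarily the one a given implementation uses):

    from functools import reduce

    def xor_chunks(*chunks):
        """XOR the corresponding bytes of equally sized chunks."""
        return bytes(reduce(lambda x, y: x ^ y, group) for group in zip(*chunks))

    # One stripe of a 3-disk RAID 5 array: two data chunks plus their parity.
    d0 = b"\x11\x22\x33\x44"          # chunk stored on disk 0
    d1 = b"\xaa\xbb\xcc\xdd"          # chunk stored on disk 1
    parity = xor_chunks(d0, d1)       # parity chunk stored on disk 2

    # If disk 1 dies, its chunk can be rebuilt from the surviving chunk and the parity.
    assert xor_chunks(d0, parity) == d1

    # Parity placement rotates from stripe to stripe so that no single disk
    # becomes a parity bottleneck; one simple rotation scheme for n disks:
    n = 3
    for stripe in range(4):
        print(f"stripe {stripe}: parity on disk {(n - 1 + stripe) % n}")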
From what is written above, RAID 5 seems a very interesting setup – and indeed it is: for this reason, RAID 5 is used in most server installations. However, there is a catch: while read requests do not involve parity operations, write requests need parity calculations. This means that CPU load can be higher with this RAID level. Also, write requests smaller than (N-1) * stripe size require a read-modify-write operation: for such small writes, the operating system does not have all the information required to calculate the parity and, in order to calculate it correctly, it has to read the data and parity previously stored on the disks.
So, for small writes, we have the following chain of events:
- read the previously stored data stripe and parity information
- combine the just-read data+parity with the data of the current write to calculate the new, to-be-written data+parity
- write the new data+parity to the appropriate disks.
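For a small write that replaces a single data chunk, the middle step boils down to a pair of XORs: the new parity is the old parity with the old data "removed" and the new data "added". A minimal sketch, continuing the toy 3-disk example above (again with made-up chunk contents):

    from functools import reduce

    def xor_chunks(*chunks):                  # same XOR helper as in the previous sketch
        return bytes(reduce(lambda x, y: x ^ y, group) for group in zip(*chunks))

    # On-disk state before the small write (values from the previous example).
    old_d0     = b"\x11\x22\x33\x44"          # data chunk on disk 0, about to be overwritten
    d1         = b"\xaa\xbb\xcc\xdd"          # untouched data chunk on disk 1
    old_parity = xor_chunks(old_d0, d1)       # parity currently stored on disk 2

    new_d0 = b"\x00\x00\x00\x01"              # the incoming small write

    # 1. read old_d0 and old_parity from the disks (the extra reads)
    # 2. compute the new parity: old parity XOR old data XOR new data
    new_parity = xor_chunks(old_parity, old_d0, new_d0)
    # 3. write new_d0 and new_parity back to their disks

    # The shortcut gives the same result as recomputing parity over the full stripe.
    assert new_parity == xor_chunks(new_d0, d1)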
This read-modify-write pattern can greatly affect applications that issue many small writes. For example, databases generally do not perform well on a RAID 5 setup.
RAID 6
Pros: Higher reliability than RAID 5
Cons: Space efficiency is lower than RAID 5
Description: it requires at least 4 disks.
RAID 6 is a variant of RAID 5: instead of storing a single parity block for each stripe, it stores two different parity blocks (usually called P and Q), computed with different algorithms and placed on two different disks. In other words, it provides added protection against data loss: the array can survive the concurrent failure of any two disks. This also means that space efficiency is lower: total array capacity is equal to (N-2) * single disk capacity. For the rest, it is identical to RAID 5 and shares the same pros and cons.
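The space-efficiency difference is easy to quantify; a quick sketch with a hypothetical 6 x 2 TB array:

    # Usable capacity of a hypothetical 6 x 2 TB array under RAID 5 and RAID 6.
    n_disks, disk_tb = 6, 2.0
    raid5_usable_tb = (n_disks - 1) * disk_tb     # 10.0 TB, survives any single disk failure
    raid6_usable_tb = (n_disks - 2) * disk_tb     #  8.0 TB, survives any two concurrent disk failures
    print(raid5_usable_tb, raid6_usable_tb)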
RAID 10
Pros: Good all-around performance and high reliability
Cons: Array capacity is only 50% of total disk space
Description: it requires a minimum of 4 disks (on Linux, mdadm can create a custom RAID 10 array using only two disks, but this setup is generally avoided).
It is the combination of RAID 0 and RAID 1. It can be implemented as nested levels or as a native level. In the first case, the operating system effectively uses two software drivers to manage the RAID array: the higher-level RAID 0 driver stripes your requests on top of two virtual disks that are, in turn, RAID 1 arrays composed of two or more disks managed by a lower-level RAID 1 driver. In the latter case, the operating system uses a single RAID driver capable of understanding this complex RAID level and of managing the disks directly, without relying on other RAID implementations.
This RAID level sits in a “middle space” between RAID 0 and RAID 1: using N disks, the total array capacity/performance is N / 2 * single disk capacity/performance. Speaking about data loss resilience, in the usual two-way-mirror layout it depends on which disks fail: in the worst case the array can tolerate only a single failed disk (data is lost as soon as both disks of the same mirror pair fail), while in the best case it can tolerate up to N / 2 failed disks (when no two failed disks belong to the same mirror pair).
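Whether a given set of failures is survivable depends only on which mirror pairs the failed disks belong to. A small sketch for the common two-way-mirror layout, using a hypothetical 6-disk array:

    # RAID 10 as a stripe over two-way mirrors: data survives as long as
    # no mirror pair loses both of its members.
    mirror_pairs = [(0, 1), (2, 3), (4, 5)]       # hypothetical 6-disk layout

    def survives(failed_disks):
        failed = set(failed_disks)
        return all(not (a in failed and b in failed) for a, b in mirror_pairs)

    print(survives({0}))          # True:  a single failure is always survivable
    print(survives({0, 2, 4}))    # True:  best case, one disk from each pair (N / 2 failures)
    print(survives({0, 1}))       # False: worst case, both members of one pair (just 2 failures)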
Generally, this is the RAID level recommended for database servers.