In common scenarios, storage subsystem speed is a significant performance bottleneck: classical electromechanical disks are very good at areal density and sequential read/write speed, but they are terribly slow at random and/or mixed read/write operations. Moreover, being electromechanical devices, normal hard disks are quite prone to failures and malfunctions.

While SSDs (and even more expensive RAM drives) significantly address this problem, the reality is that platter-based disks are currently the dominant storage media, and this situation is unlikely to change in the next 3-5 years.

In order to increase performance and reliability, many servers, workstations, and NAS devices combine, in different fashions, multiple disks into a single logical drive. This multi-disk-per-volume scheme is called RAID – Redundant Array of Independent (or Inexpensive) Disks. You can read more on the subject here and here.

Sometimes RAID volumes are managed by an application-specific card/circuit, the so-called “hardware RAID controller”. Other times, RAID arrays are managed by a software driver inside the operating system, creating a “software RAID” volume. Linux has an advanced software-RAID layer that not only supports different RAID levels (e.g. RAID 0, RAID 1, etc.) but also delivers quite good performance.
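As a quick illustration of the Linux software-RAID layer, here is a minimal sketch using mdadm, the standard management tool for Linux md (multiple device) arrays. The device names (/dev/sdb, /dev/sdc) are placeholders for this example and must be adapted to your system; the commands require root privileges and unused block devices.

```shell
# Create a simple two-disk software RAID 1 mirror named /dev/md0.
# /dev/sdb and /dev/sdc are placeholder device names -- adapt to your system.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# Watch the array build/resync progress
cat /proc/mdstat

# Show detailed information about the array
mdadm --detail /dev/md0
```

Once the initial resync completes, /dev/md0 can be formatted and mounted like any ordinary block device.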

This article will focus on a specific RAID level: the RAID 10 (or 1+0) configuration. We will dive into the configuration details and benchmarks of this fast and reliable RAID setup.
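To set the stage, a minimal sketch of creating a Linux software RAID 10 array with mdadm follows. The four device names are placeholders assumed for this example; the commands need root privileges, and the mdadm configuration file path varies by distribution (/etc/mdadm.conf on some systems, /etc/mdadm/mdadm.conf on others).

```shell
# Create a four-disk RAID 10 array. Device names are placeholders --
# substitute four unused block devices from your system. Requires root.
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
      /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Record the array in the mdadm config so it is assembled at boot
# (path may be /etc/mdadm/mdadm.conf on Debian-based distributions)
mdadm --detail --scan >> /etc/mdadm.conf
```

RAID 10 stripes data (as in RAID 0) across mirrored pairs (as in RAID 1), which is why it combines good random I/O performance with tolerance of single-disk failures.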