Testbed and methods
All tests were performed on this machine:
- Intel Core i7-860 @ 2.8 GHz
- Motherboard Asus P55 Pro
- 8 GB of DDR3 memory running @ 1600 MHz
- Four 1 TB (931.5 GiB) Western Digital Green disks (model WD10EADS-00L5B1, firmware version 01.01A01, ~5400 rpm)
- Video card GeForce 8400GS
- OS: RHEL 6.3 x86_64
Each disk was connected to a SATA port provided by the P55 chipset and was partitioned as follows:
- a first ~0.5 GiB EXT4 boot partition in a RAID1 array
- a second ~32 GiB EXT4 system partition in a RAID10 “near” array
- a third ~4 GiB swap partition in a RAID10 “near” array
- a fourth ~100 GiB XFS data partition in a RAID10 array, tested with different layouts (“near”, “far”, “offset”)
Please note that, in order to reduce RAID array synchronization time, I used a 100 GiB partition for the benchmarks. On the final production server, however, the data partition was about 900 GiB. This has some performance implications that we will discuss later.
For each RAID10 array, I used the default 512 KiB chunk size.
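To make the three RAID10 layouts concrete, here is a small Python sketch that maps a logical chunk number to the (disk, row) slots holding its copies. This is a simplified model, not the md driver's actual arithmetic: it assumes an even number of disks and 2 copies, and `zone_rows` is a made-up parameter standing in for the number of chunk rows in each “far” zone.

```python
def raid10_map(chunk, ndisks=4, copies=2, layout="near", zone_rows=1000):
    """Return the (disk, row) slots holding each copy of a logical chunk.

    Simplified model of the md RAID10 layouts; illustrative only.
    """
    if layout == "near":
        # Copies sit side by side on adjacent disks within the same row:
        # row 0 holds A A B B, row 1 holds C C D D, and so on.
        slot = chunk * copies
        return [((slot + c) % ndisks, (slot + c) // ndisks)
                for c in range(copies)]
    if layout == "far":
        # Copy 0 is striped like RAID0 across the first zone of every
        # disk; each further copy repeats the same stripe in the next
        # zone, shifted by one disk.
        row, disk = divmod(chunk, ndisks)
        return [((disk + c) % ndisks, row + c * zone_rows)
                for c in range(copies)]
    if layout == "offset":
        # Copies occupy consecutive rows, each shifted by one disk, so
        # rows 0-1 hold A B C D / D A B C, rows 2-3 hold E F G H / H E F G.
        row, disk = divmod(chunk, ndisks)
        return [((disk + c) % ndisks, row * copies + c)
                for c in range(copies)]
    raise ValueError(f"unknown layout: {layout}")
```

For example, `raid10_map(0, layout="offset")` returns `[(0, 0), (1, 1)]`: the first copy of chunk A on disk 0, and its mirror one row down on disk 1. The model also hints at why “far” reads can approach RAID0 speed: copy 0 alone is a full RAID0 stripe over all four disks.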
In order to quickly evaluate each RAID array layout, I used Intel IOMeter with an 8 GiB test file. On Linux, IOMeter seems unable to correctly scale the queue depth above 2 (it opens the test file with O_DIRECT, which serializes and synchronizes access to it), so I created a high queue depth by running 1, 2 and 4 “worker” threads. Since each worker can keep up to 2 requests in flight, this setup simulates the load imposed by 1, 2, 4 and 8 concurrent I/O threads.
I understand that this is limited, imperfect testing. However, all the Linux software RAID comparisons I found seem so flawed that IOMeter results alone should still be much, much more accurate.
So, it's time for some numbers...