Linux I/O schedulers benchmarked - anticipatory vs CFQ vs deadline vs noop

Written by Gionatan Danti. Posted in Linux & Unix


Testbed and methods

The benchmarks were performed on a system equipped with:

  • PhenomII 940 CPU (4 cores @ 3.0 GHz, 1.8 GHz Northbridge and L3 cache)
  • 8 GB of DDR2-800 DRAM (operating in unganged mode)
  • Asus M4A78 Pro motherboard (AMD 780G + SB700 chipset)
  • four 500 GB disks in AHCI mode (with NCQ) + software RAID 10 (near) configuration
  • OS: Linux RHEL 6.3 x64

To cover standard, direct, and synchronized read/write operations, I used a mix of real-world and synthetic tests:

  • the untar, sync and cat system utilities represent some very common usage patterns;
  • the single and multi-threaded kernel compile tests simulate a developer-focused environment;
  • iometer enables us to measure direct I/O performance;
  • the sysbench and pgbench programs put a strain on synchronous read and write requests.

The benchmarked schedulers were selected through the /sys/block/<dev>/queue/scheduler entry. The software-controlled queue size was 128 elements for each device, for a total of 512 elements in my four-disk setup. Moreover, the AHCI and NCQ settings ensure that each disk has an on-board, hardware-controlled queue 31 elements in size; for a four-disk setup, this adds up to another 124 elements.
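A minimal sketch of the sysfs knobs involved, assuming a device named sda (the other member disks follow the same pattern):

```shell
# List the available schedulers; the active one appears in brackets
cat /sys/block/sda/queue/scheduler

# Select a different scheduler at runtime (needs root)
echo deadline > /sys/block/sda/queue/scheduler

# Software-controlled queue depth (128 per device in this setup)
cat /sys/block/sda/queue/nr_requests

# NCQ hardware queue depth (31 per disk here)
cat /sys/block/sda/device/queue_depth
```

A scheduler change made this way applies immediately but does not survive a reboot; on RHEL 6 it can be made persistent via the elevator= kernel boot parameter.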

Comments

 
#1 Ren 2013-11-22 07:11
Thanks for this very interesting post. Perhaps you could benchmark the different schedulers including BFQ (http://algo.ing.unimo.it/people/paolo/disk_sched/) next time?
 
