Linux I/O schedulers benchmarked - anticipatory vs CFQ vs deadline vs noop

Written by Gionatan Danti. Posted in Linux & Unix


Direct I/O performance – random 4KB operations

While not especially common (its use is more or less discouraged under Linux), direct I/O remains important for some video/audio programs, databases and virtualization systems. A wonderful application that enables us to test direct I/O performance in a number of scenarios is Intel IOMeter. I used it to track random and sequential read/write speed with an increasing number of threads (from 1 to 8).
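
For readers without IOMeter at hand, the sketch below shows one way to approximate the same workload on a single thread with plain Python. It is only an illustration under assumptions: the target path is a placeholder, O_DIRECT is Linux-specific, and both the buffer and the offsets must be block-aligned (here an anonymous mmap provides a page-aligned buffer).

import mmap
import os
import random
import time

PATH = "/path/to/testfile"   # hypothetical target: use a scratch file or block device
BLOCK = 4096                 # 4KB request size, as in the benchmark
SECONDS = 10                 # length of one measurement

fd = os.open(PATH, os.O_RDONLY | os.O_DIRECT)   # bypass the page cache
size = os.lseek(fd, 0, os.SEEK_END)             # size of the target in bytes
buf = mmap.mmap(-1, BLOCK)   # anonymous mmap is page-aligned, as O_DIRECT requires

ios = 0
deadline = time.perf_counter() + SECONDS
while time.perf_counter() < deadline:
    # pick a random, block-aligned offset inside the target
    offset = random.randrange(size // BLOCK) * BLOCK
    os.preadv(fd, [buf], offset)
    ios += 1

os.close(fd)
print(f"{ios / SECONDS:.0f} IOPS (1 thread, 4KB random reads)")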

First, let's have a look at random 4KB read operations (thread count is on the X axis, while IOPS are on the Y axis):

4KB random read

With the exception of anticipatory, all schedulers perform similarly here.

Now, random 4KB writes:

4KB random write

We see more variance here, with CFQ and deadline showing the best results.

What about mixed read/write random 4KB performance?

4KB random mixed

This is interesting: under this very stressful test, the anticipatory and noop schedulers show very low results at low thread counts. Anticipatory's poor showing can be explained by the artificial delay it inserts between I/O operations, while noop suffers from a limited reorder window: its software queue is treated as a FIFO buffer, so requests can only be rearranged inside the hardware queues provided by AHCI NCQ.
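
To see why a small reorder window hurts a rotational disk, here is a toy Python illustration (not a model of the kernel queues): the same 32 pending requests cost far more head travel when served in arrival order, as noop's software queue does, than when sorted by sector, which is roughly what the elevator-style schedulers and the NCQ hardware queue achieve.

import random

random.seed(42)
pending = [random.randrange(1_000_000) for _ in range(32)]   # sectors of queued requests

def total_seek(order, start=0):
    # sum of absolute head movements when requests are served in this order
    distance, position = 0, start
    for sector in order:
        distance += abs(sector - position)
        position = sector
    return distance

print("FIFO dispatch:  ", total_seek(pending))          # noop-style, arrival order
print("sorted dispatch:", total_seek(sorted(pending)))  # elevator-style, by sector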

Comments   

 
#1 Ren 2013-11-22 07:11
Thanks for this very interesting post. Perhaps you could benchmark the different schedulers including BFQ (http://algo.ing.unimo.it/people/paolo/disk_sched/) next time?
 
