Direct I/O performance – random 4KB operations
While not especially common (its use is more or less discouraged under Linux), direct I/O remains important for some video/audio programs, databases, and virtualization systems. A useful application for testing direct I/O performance in a number of scenarios is Intel's IOMeter. I used it to track random and sequential read/write speed with an ever-increasing number of threads (from 1 to 8).
First, let's have a look at random 4KB read operations (thread count is on the X axis, IOPS on the Y axis):
With the exception of anticipatory, all schedulers perform similarly here.
Now, random 4K writes:
We see more variance here, with CFQ and deadline showing the best results.
What about mixed read/write random 4KB performance?
This is interesting: under this very stressful test, the anticipatory and noop schedulers perform poorly at low thread counts. Anticipatory's weak showing may be due to its artificially induced delay between I/O operations, while noop suffers from its limited reordering window: with noop, the software-based queue is treated as a FIFO buffer, and requests can be rearranged only within the hardware queues provided by AHCI NCQ.