Sysbench file benchmark

Filesystem I/O performance is difficult to profile. For this reason, I ran another set of sequential and random I/O transfer benchmarks using the sysbench utility. Sequential speed tests were run with 2 MB blocks, while random speed tests used 4 KB blocks.
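For reference, tests of this kind can be reproduced with sysbench's fileio mode along these lines (a sketch: the total file size of 8G and the exact options are assumptions, as they are not stated above):

```shell
# Create the test files first (8 GB total is an assumed value)
sysbench --test=fileio --file-total-size=8G prepare

# Sequential write and read, 2 MB blocks
sysbench --test=fileio --file-total-size=8G --file-test-mode=seqwr \
         --file-block-size=2M run
sysbench --test=fileio --file-total-size=8G --file-test-mode=seqrd \
         --file-block-size=2M run

# Random write and read, 4 KB blocks
sysbench --test=fileio --file-total-size=8G --file-test-mode=rndwr \
         --file-block-size=4K run
sysbench --test=fileio --file-total-size=8G --file-test-mode=rndrd \
         --file-block-size=4K run

# Remove the test files when done
sysbench --test=fileio --file-total-size=8G cleanup
```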

Let's start with sequential speed:

Sysbench sequential test

While in normal, cached mode the two filesystems are quite evenly matched, the synchronous test shows some divergence: XFS is faster in sequential writes, while EXT4 is faster in sequential reads.

Please note that EXT4 sequential read speed is higher in synchronous mode than in normal mode: can this be related to a side effect of delayed allocation? Remember that in normal mode, sysbench's test issues one fsync() per 100 writes, while in synchronous mode it issues one fsync() for each write, effectively disabling the delayed allocator. My two cents: if the just-written files read back faster in the latter mode, it may be that the delayed allocation feature can sometimes lower performance.
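The two fsync() behaviors map to sysbench options roughly as follows (a sketch; one fsync() per 100 writes is in fact sysbench's default, and the file size is an assumed value):

```shell
# Normal mode: one fsync() every 100 writes (sysbench's default)
sysbench --test=fileio --file-total-size=8G --file-test-mode=seqwr \
         --file-block-size=2M --file-fsync-freq=100 run

# Synchronous mode: one fsync() after every write, which defeats
# EXT4's delayed allocation
sysbench --test=fileio --file-total-size=8G --file-test-mode=seqwr \
         --file-block-size=2M --file-fsync-all=on run
```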

Now, random speed:

Sysbench random test

I'm not sure how to interpret the XFS random read speed, as it seems to be higher than the theoretical maximum (considering a 4 ms rotational delay, 4 KB blocks and 5 active data disks, we end up with a maximum of ~5000 KB/s). Probably, when using XFS, this read benchmark is greatly influenced by OS caching and/or read-ahead settings. Write speed seems fine, though, and we see that XFS is faster here by quite a large margin. However, the absolute results are very low: this is, again, a consequence of the mechanical nature of current hard disks and the lack of any caching by the controller/disk combo.
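The ~5000 KB/s ceiling comes from simple arithmetic: a 4 ms rotational delay allows at most 250 random I/Os per second per disk, and each I/O moves 4 KB across 5 data disks:

```shell
# 1000 ms / 4 ms = 250 IOPS per disk; 250 IOPS * 4 KB * 5 disks
echo "$(( (1000 / 4) * 4 * 5 )) KB/s"  # prints "5000 KB/s"
```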