Synchronous operations performance – PostgreSQL benchmarks
Most data-critical systems make wide use of synchronized writes, either by opening the target file with O_SYNC or by issuing regular fsync() calls. One such fsync-heavy application is PostgreSQL. Let's see whether the schedulers are able to significantly alter PostgreSQL performance. Please note that all the following tests were run with a concurrency factor of 32 threads.
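To make the distinction concrete, here is a minimal sketch of the two write modes using dd (file names and sizes are arbitrary choices for illustration): oflag=dsync forces every block to reach stable storage before the next write, which is the kind of behavior O_SYNC and frequent fsync() calls impose on the I/O path.

```shell
# Buffered writes: the kernel may cache the data and flush it later.
dd if=/dev/zero of=buffered.img bs=8k count=128 2>/dev/null

# Synchronous writes: each 8 KB block is flushed to disk before dd continues,
# so the I/O scheduler sees a stream of small, dependent requests.
dd if=/dev/zero of=synced.img bs=8k count=128 oflag=dsync 2>/dev/null

ls -l buffered.img synced.img
```

On rotating media the synchronous variant is typically far slower, because each flush must wait for the platter, leaving the scheduler little room to merge or reorder requests.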
First, let's look at the sysbench prepare time for a test database containing 100K rows:
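For reference, the prepare step looks roughly like this (classic sysbench 0.4-style syntax; the database name and credentials are placeholders for your own setup):

```shell
# Hypothetical invocation: create and populate the 100K-row test table.
sysbench --test=oltp --db-driver=pgsql \
         --pgsql-db=sbtest --pgsql-user=sbtest --pgsql-password=sbtest \
         --oltp-table-size=100000 \
         prepare
```

This phase is dominated by bulk inserts and the fsync() calls PostgreSQL issues at commit time. It requires a running PostgreSQL instance, so it is shown here for illustration only.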
In this very fsync()-heavy test, we do not see any meaningful difference.
What about the read-only simple test (100K requests)?
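The read-only run can be sketched as follows (same hedged sysbench 0.4-style syntax and placeholder credentials as above), matching the 32-thread, 100K-request parameters used throughout:

```shell
# Hypothetical invocation: simple read-only OLTP test, 100K total requests.
sysbench --test=oltp --db-driver=pgsql \
         --pgsql-db=sbtest --pgsql-user=sbtest --pgsql-password=sbtest \
         --oltp-read-only=on --max-requests=100000 --num-threads=32 \
         run
```

Shown for illustration only; it needs a prepared test database on a live server.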
As a large part of the database is cached in system RAM, we again see a tie.
Now, the complex, read-write, transaction-heavy (10K requests) test:
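The complex mode mixes reads, updates, inserts, and deletes inside transactions; a hedged sketch of the invocation (again sysbench 0.4-style syntax with placeholder credentials):

```shell
# Hypothetical invocation: transaction-heavy read-write test, 10K requests.
sysbench --test=oltp --db-driver=pgsql \
         --pgsql-db=sbtest --pgsql-user=sbtest --pgsql-password=sbtest \
         --oltp-test-mode=complex --max-requests=10000 --num-threads=32 \
         run
```

Every committed transaction triggers a WAL flush, which is why this test stresses synchronous writes far more than the read-only one.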
Again, no real differences.
Will another PostgreSQL benchmark, pgbench (scale=100, transactions per client=100), show different results?
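The pgbench parameters above map to the following invocation (the database name is a placeholder; everything else follows the stated scale, client count, and transaction count):

```shell
# Initialize with scale factor 100 (~10M rows in pgbench_accounts).
pgbench -i -s 100 pgbench

# Run 32 concurrent clients, 100 transactions per client.
pgbench -c 32 -t 100 pgbench
```

As with the sysbench commands, this is illustrative only and assumes a reachable PostgreSQL server.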
Again, we see only very slight variations.
Do these results mean that the selected I/O scheduler is irrelevant to good disk performance? Not necessarily. Remember that both sysbench and pgbench focus on a limited workload type; a more real-world scenario might highlight stronger variations. Likewise, a system with a beefier disk subsystem (or with a hardware RAID controller) may react differently. In any case, it is clear that with a very fsync()-heavy workload, the chances for I/O optimization at the software level are rather low.