EXT3 vs EXT4 vs XFS vs BTRFS filesystem comparison on Fedora 18

Written by Gionatan Danti. Posted in Linux & Unix


Testbed and methods

Benchmarking filesystems is not an easy task, for many reasons: there are endless usage scenarios, each with its own requirements and usage patterns; each filesystem has custom options that can modify its behavior considerably; and different kernel releases can produce different benchmark results.

So, in order to give you consistent, reproducible results, I had to make some very important choices about the benchmarks, the options and the kernel to use. The benchmark suite is composed of some theoretical and real-world tests (illustrative invocations are sketched after the list):

  • sysbench (version 0.4.12) is a semi-synthetic test useful for benchmarking database performance;
  • fs_mark (version 3.3) is a synthetic test aimed at measuring file creation speed;
  • postgresql (version 9.2.4) and mysql (version 5.5.31) are the two benchmarked database systems;
  • tar/untar and cat are representative of common, real-world usage patterns;
  • finally, filefrag is a very helpful filesystem utility used to evaluate file fragmentation.
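
As a purely illustrative sketch (the article does not restate the exact parameters used, so table sizes, request counts, file counts and sizes below are placeholders), the sysbench OLTP and fs_mark tests are typically invoked along these lines:

  # sysbench 0.4.x OLTP test against a MySQL instance (placeholder values)
  sysbench --test=oltp --mysql-user=root --mysql-db=sbtest --oltp-table-size=1000000 prepare
  sysbench --test=oltp --mysql-user=root --mysql-db=sbtest --num-threads=4 --max-requests=10000 run

  # fs_mark file-creation test on the /opt test partition (placeholder values)
  fs_mark -d /opt/fsmark -n 1000 -s 10240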

All filesystems were created with default options (which implies that write barriers were enabled on all of them), using the mkfs Linux utility; a sketch of the corresponding mkfs/mount commands follows the list below. In detail, the mount options as reported by the mount command were:

  • ext3: /dev/sda3 on /opt type ext3 (rw,relatime,seclabel,data=ordered)
  • ext4: /dev/sda3 on /opt type ext4 (rw,relatime,seclabel,data=ordered)
  • xfs: /dev/sda3 on /opt type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
  • btrfs: /dev/sda3 on /opt type btrfs (rw,relatime,seclabel,space_cache)
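
For reference, a minimal sketch of what creating each filesystem with default options looks like (the device and mount point match the mount output above; the filesystem type is substituted for each run):

  umount /opt                   # if still mounted from a previous run
  mkfs -t <fstype> /dev/sda3    # <fstype> = ext3, ext4, xfs or btrfs, no extra options
  mount /dev/sda3 /opt

Note that mkfs.xfs (and recent mkfs.btrfs) may need -f to overwrite an existing filesystem signature left by a previous run.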

File fragmentation levels were checked after the sysbench and fs_mark large-file tests.
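
For illustration, filefrag reports how many extents a file occupies; the paths below are placeholders for the files left behind by the benchmarks:

  filefrag /opt/sbtest/test_file.*      # extent count per file (paths are illustrative)
  filefrag -v /opt/fsmark/large_file    # -v also lists the individual extents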

All tests were run on a Dell D620 laptop. The complete system specifications are:

  • Core 2 Duo T7200 CPU @ 2.0 GHz
  • 4 GB of DDR2-667 RAM
  • Quadro NVS 110 video card (used in text-only mode)
  • a Seagate ST980825AS 7200 RPM 80 GB SATA hard disk drive (in IDE compatibility mode, as the D620's BIOS does not support AHCI operation)
  • OS: Fedora 18 x86_64 with kernel-3.6.10-4.fc18.x86_64

The internal hard disk was partitioned into three slices: a first ~9 GB ext4 partition for the root filesystem, a second ~4 GB partition for swap and a third ~60 GB partition (mounted on /opt) for testing purposes. CPU frequency/voltage scaling was disabled and the system was used in text-only mode (no X running here). All benchmarked services (basically postgresql and mysql) were reconfigured to store their data on the 60 GB test partition mounted on /opt.
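
As a rough sketch of the reconfiguration involved (paths and service names are assumptions based on Fedora defaults, not details taken from the article), the data directories can be moved onto the test partition and symlinked back:

  systemctl stop postgresql.service mysqld.service

  mkdir -p /opt/pgsql /opt/mysql
  mv /var/lib/pgsql/data /opt/pgsql/data
  mv /var/lib/mysql /opt/mysql/data
  ln -s /opt/pgsql/data /var/lib/pgsql/data
  ln -s /opt/mysql/data /var/lib/mysql

  # mv preserves the SELinux labels the services expect (the seclabel mount
  # option above shows SELinux is active); a copy would need relabeling instead

  systemctl start postgresql.service mysqld.service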

Comments   

 
#1 Jan 2013-06-27 15:42
Thanks for the test. It seems as if btrfs is a bit Janus-faced: sometimes very fast, sometimes very slow.

It would be very interesting to see in future tests how ZFS on Linux performs, especially now that it became "production-ready" a few weeks ago.
 
 
#2 Gionatan Danti 2013-06-27 16:12
Hi Jan,
this is surely a good idea ;)

I will investigate this possibility for the next review.

Regards.
 
 
#3 Altr 2013-12-21 10:49
Thank you for the benchmarks!
 
 
#4 Iván Baldo 2014-04-14 03:12
For databases or virtual machine images, you should disable the copy-on-write semantics of BTRFS.
You don't need to set the entire filesystem to be non COW, only the directory and files that need it (chattr -C flag).
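
For example (the directory name below is just an illustration), the attribute is set with chattr +C on a directory so that files created inside it inherit NOCOW; it has no effect on files that already contain data:

  mkdir /opt/vm-images
  chattr +C /opt/vm-images     # newly created files inherit the No_COW attribute
  lsattr -d /opt/vm-images     # the 'C' flag should now be listed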
 
 
#5 Gionatan Danti 2014-04-14 09:39
Quoting Iván Baldo:
For databases or virtual machine images, you should disable the copy-on-write semantics of BTRFS.
You don't need to set the entire filesystem to be non COW, only the directory and files that need it (chattr -C flag).


Hi Ivan,
you are right. Anyway, I benchmarked BTRFS even with CoW disabled and found that, for virtual machines at least, it performs noticeably worse than a traditional filesystem such as EXT4.

You can read more here:
http://www.ilsistemista.net/index.php/linux-a-unix/36-btrfs-mount-options-and-virtual-machines-an-in-depth-look.html

The only catch is that both tests are somewhat old now, having been performed on Fedora 17 and 18. I should really check whether BTRFS performance has improved with newer kernels.

But I have so little time ;)
 
 
#6 Dan 2015-10-09 06:37
It's two years later, but this post still comes up top in searches and the conclusions are outdated. Critical data requires historical snapshots AND backups (preferably backups of the historical snapshots). Big critical data requires applications that are well written to do atomic transactions, leaving the disk always consistent (assuming the fs supports it), and then requires atomic backups leaving the backups consistent. This simply is NOT achievable (i.e. impossible) with ext3 or ext4. Adding some small extra stability risks to the many risks that already exist is a small price to pay for proper protection from those risks. NTFS has had shadow copies for ages, by the way. Actually building a backup scheme to leverage these tools in a smart way isn't so simple, but it's worth doing.
 
