Linux software RAID: RAID 5 vs RAID 10 performance and other RAID levels

Written by Gionatan Danti on . Posted in Linux & Unix


HINT: if you are interested in the quick & dirty benchmarks only, go to page #4

It is no secret that while processors, memory and peripherals have constantly increased in speed (sometimes in a more than linear manner), common storage subsystems have constantly lagged behind the other components.

This is hardly a surprise, considering that the great majority of common permanent data storage systems are based on mechanical (rather than electronic) devices. This mechanical nature intrinsically means that these devices are far slower than processors and other electronic components. Consider, for example, probably the most common storage medium: the hard disk. While these devices have grown in capacity and offer an outstanding space/cost ratio (today you can buy a high-quality 2 TB disk for less than 200€, while 10 years ago you had to pay about the same money for a 20 GB disk), their speed has evolved at a much, much lower rate. This is due to the fact that these mechanical devices have two moving parts: the rotating platters (driven by an electric motor) and the heads (moved around by an actuator).

So, while high capacity guarantees high sequential read and write speeds (because the platters' areal density has improved tremendously over time), random read and write speeds are only a small fraction of the maximum theoretical speed, and only a little better than those of a 10-year-old disk. At the same time, these moving parts also imply that hard disks are prone to failure at a rate an order of magnitude (or more) greater than that of purely electronic parts.

The Phenom / PhenomII memory controller: ganged vs unganged mode benchmarked

Written by Gionatan Danti on . Posted in Hardware analysis


HINT: if you are interested in the quick & dirty benchmarks only, go to page #4

It is no secret that processor performance grows at a very fast rate, faster than that of any other PC / server component. This disparity challenges CPU designers, as they must create faster processors that are impacted by the slower system components as little as possible.

One of these system components, and one that can have a great influence on processor speed, is the Random Access Memory, or RAM for short. In recent years, a lot of effort has gone into raising RAM speed: in less than a decade, we went from 133 MHz SDR DIMMs to 1333 MHz DDR3 DIMMs, effectively increasing bandwidth by a factor of 10X. If you consider that modern PC and server platforms use two or more memory channels, you can quickly appreciate the improvement in memory speed over the last ten years.

However, CPU performance goes up at an even faster rate. Also, while memory bandwidth has improved tremendously, memory latency has improved by a factor of 2X or 3X at most. So, while today's RAMs are quite fast at moving relatively large data chunks (they have a burst speed in the range of 6.4 – 12.8 GB/s per DIMM module), their effective access latency remains at around 40-50 ns. As a result, RAM speed can seriously limit CPU speed.
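To put that latency figure in CPU terms, here is a quick back-of-the-envelope sketch (the 3 GHz clock is an assumption, matching the Phenom-class CPUs discussed below):

```python
# Back-of-the-envelope: how many core clock cycles a single uncached
# memory access costs, assuming a 3 GHz clock and ~50 ns DRAM latency.
cpu_clock_hz = 3.0e9      # 3 GHz core clock (assumed, Phenom-class CPU)
mem_latency_s = 50e-9     # ~50 ns effective access latency (from the text)

cycles_per_access = cpu_clock_hz * mem_latency_s
print(f"One memory access costs ~{cycles_per_access:.0f} CPU cycles")
# ~150 cycles per access: enough time for the core to have executed
# hundreds of instructions, had the data been in cache.
```

This is why latency, even more than raw bandwidth, is so painful: every uncached access stalls the core for on the order of a hundred cycles.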

For example, consider the FSTORE unit of the Phenom / Phenom II CPU: it can output one canonical 64-bit-wide x87 register each clock, and it is clocked at around 3.0 GHz. Simple math reveals that, under optimal conditions, a single core of a 3.0 GHz Phenom / Phenom II processor can store floating point data at around 24 GB/s. Considering that the Phenom II X4 940 has four cores, a single processor can write floating point data at a peak of 96 GB/s! And this is only part of the story, as the integer input/output rates are almost double that. Compare these values to the peak bandwidth delivered by a single memory module and you will realize that today's processors can really be limited by memory bandwidth.
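The arithmetic above can be spelled out in a few lines (using only the figures already given in the text; the single-DIMM comparison assumes the upper 12.8 GB/s burst rate):

```python
# Peak floating-point store bandwidth of a Phenom II, per the text:
# one 64-bit (8-byte) x87 store per clock, at a 3.0 GHz clock.
clock_hz = 3.0e9
bytes_per_store = 8       # one 64-bit x87 register
cores = 4                 # Phenom II X4 940

per_core_gb_s = clock_hz * bytes_per_store / 1e9
total_gb_s = per_core_gb_s * cores
print(f"Per core: {per_core_gb_s:.0f} GB/s, whole CPU: {total_gb_s:.0f} GB/s")
# Per core: 24 GB/s, whole CPU: 96 GB/s

# Compare against a single DDR3 DIMM's best-case burst bandwidth:
dimm_gb_s = 12.8
print(f"Peak stores exceed one DIMM's bandwidth by {total_gb_s / dimm_gb_s:.1f}x")
# Peak stores exceed one DIMM's bandwidth by 7.5x
```

Even with two memory channels, the gap between what the cores can produce and what the DIMMs can absorb remains large, which is the motivation for examining the memory controller's ganged and unganged modes.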

Vmware vs Virtualbox vs KVM vs XEN: virtual machines performance comparison

Written by Gionatan Danti on . Posted in Virtualization


Today, “virtual machine” seems to be a magic phrase in the computer industry. Why? Simply stated, this technology promises better server utilization, better server management, better power efficiency and, oh yes, a random pick of other better things! The obvious question is whether virtual machine technology really provides this better experience. In short: yes. While it has its share of problems and complications, when used correctly this technology can really please you with some great advantages over the one-operating-system-per-server paradigm widely used in the x86 arena.

But, assuming that virtual machines make sense in your environment, what is the best virtualization software to choose? There are many virtualizers and paravirtualizers available today, and some once-commercial virtualization products are now released for free (for example, think of VMware Server and Citrix XenServer). In the end, the choice can be very hard. As the remaining commercial, non-free virtualizers are designed for the upper end of the market (datacenters or large corporations), this article will focus on freely available virtual machine software. So, be prepared for a furious battle: VMware vs Virtualbox vs KVM vs Xen!