KVM I/O slowness on RHEL 6
- Written by: Gionatan Danti
- Category: Virtualization
Over a year has passed since my last virtual machine hypervisor comparison, so last week I was preparing an article with a face-to-face comparison between RHEL 6 KVM and Oracle VirtualBox 4.0. I spent several days creating some nice, automated scripts to evaluate the two products from different points of view, and I was quite confident that the benchmark session would be completed without too much trouble. So, I installed Red Hat Enterprise Linux 6 (license courtesy of Red Hat Inc. - thank you guys!) on my workstation and began installing the virtual machine images.
However, the unexpected happened: under KVM, a Windows Server 2008 R2 Foundation installation took almost 3 hours, while it should normally complete in about 30-45 minutes. Similarly, installing the base system that precedes the “real” Debian 6.0 installation took over 5 minutes, when it can normally be completed in about 1 minute. In short: the KVM virtual machines suffered from an awfully slow disk I/O subsystem. In previous tests I had seen that KVM's I/O subsystem was a bit slower, but not by this much; clearly, something was impairing my KVM I/O speed. I tried different combinations of virtualized disk controllers (IDE or VirtIO) and cache settings, but without success (an illustration of where these settings live is shown below). I also switched my physical disk's filesystem to EXT3, to rule out any possible, hypothetical EXT4 speed regression, but again with no results: the slow KVM I/O problem remained.
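For readers less familiar with these knobs: on RHEL 6 the disk bus and cache mode are typically set per disk in the libvirt domain XML (virt-manager exposes the same options). The snippet below is only an illustrative sketch, not my actual test configuration - the image path is a placeholder, and cache='writeback' is just one of the modes (none, writethrough, writeback) that can be cycled through:

    <disk type='file' device='disk'>
      <!-- cache mode under test: none, writethrough or writeback -->
      <driver name='qemu' type='raw' cache='writeback'/>
      <!-- placeholder image path -->
      <source file='/var/lib/libvirt/images/guest.img'/>
      <!-- virtio bus; use dev='hda' bus='ide' for the emulated IDE controller -->
      <target dev='vda' bus='virtio'/>
    </disk>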
Published Kal-El (TEGRA3) performance: is NVIDIA's SoC truly faster than a Core2?
- Written by: Gionatan Danti
- Category: Hardware analysis
Anyone involved in IT and computer technology over the last decade knows NVIDIA very well: this graphics & compute chip design company has reached many important targets and set new state-of-the-art performance levels in nearly every market where it has operated.
A few days ago, NVIDIA unveiled its next-generation SoC project, codenamed Kal-El. This SoC is going to set new performance standards in its segment, featuring four ARM Cortex-A9 cores (each with NEON support, thanks to the integrated MPE) and a renewed, 12-core-wide graphics controller. AnandTech did a great job explaining the Kal-El architecture, so I advise anyone interested in SoC performance to read the article here: http://www.anandtech.com/show/4181/nvidias-project-kalel-quadcore-a9s-coming-to-smartphonestablets-this-year
However, NVIDIA did not limit itself to announcing its new hardware beast: it also showed some benchmark numbers to prove Kal-El's superior speed. The benchmark used was CoreMark 1.0 (you can read about it here: http://www.coremark.org/home.php), a synthetic CPU (ALU) benchmark. Kal-El performs very well here, doubling Tegra 2 performance: this is an extremely impressive accomplishment, as the tested Kal-El silicon was only 12 days old, and it is a testament to NVIDIA's ability to design top-notch chips.
GTK 2 and general Linux graphics performance analysis
- Written by: Gionatan Danti
- Category: Linux & Unix
Modern operating systems are very complex pieces of software: they provide many integrated functionalities, wrapping them in a very nice desktop environment. In recent years, advancements in computer speed have let developers concentrate more on creating consistent, easy-to-use and very attractive graphical user interfaces (GUIs), with many eye-candy effects turned on by default.
However, while in recent years aggregate computer speed (throughput) has steadily increased more or less at the rate predicted by Moore's law (note: the law primarily refers to the number of integrated transistors, but I see it used everywhere as a performance metric as well), single-thread performance (latency) has not seen the same radical increase. This means that complex, mostly single-threaded applications such as the GUI and, by extension, the entire graphics stack have to be very careful about their speed: if the user perceives the GUI and/or other graphics as slow, the experience of using the machine will be quite bad.
So, a fast and responsive GUI and graphics system is of the utmost importance for comfortable use of a desktop computer. On the other hand, this kind of application is quite hard to program, because the hardware gives you no facilities to create nice graphics. So, while in the old days (20 years ago) GUIs and OS graphics were programmed in assembly language, today the hardware is abstracted under various library layers.
EXT3 vs EXT4 vs XFS vs BTRFS Linux filesystems benchmark
- Written by: Gionatan Danti
- Category: Linux & Unix
In recent years there has been considerable ferment in the Linux community regarding which filesystem is best suited to accomplish its goal – to organize your files. In endless discussions we can read about the alleged superiority of one filesystem over another; however, these statements often lack objective data points to back them up.
As a typical Linux user can choose from a plethora of very different filesystems, I would like to give you some numbers to compare them. In this article we will focus on the performance of these filesystems (a minimal sketch of the kind of measurement involved follows the list):
- ext3, which has been the “standard” Linux filesystem for almost a decade;
- ext4, the highly anticipated ext3 successor;
- xfs, a high-performance filesystem originally developed by Silicon Graphics, Inc. for the IRIX operating system;
- btrfs, a new, next-generation filesystem developed with scalability in mind.
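To give a concrete idea of what “performance” means here, the sketch below times a sequential write against a mount point of the filesystem under test. It is only a minimal, hypothetical example (the path, file size and block size are placeholders), not the actual benchmark suite used for this article:

    import os
    import time

    PATH = "/mnt/test/bench.tmp"   # hypothetical mount point of the filesystem under test
    SIZE_MB = 1024                 # total data to write
    BLOCK = 1024 * 1024            # write in 1 MiB chunks

    buf = b"\0" * BLOCK
    start = time.time()
    fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    for _ in range(SIZE_MB):
        os.write(fd, buf)
    os.fsync(fd)                   # make sure the data really hits the disk before stopping the clock
    os.close(fd)
    elapsed = time.time() - start
    print("sequential write: %.1f MB/s" % (SIZE_MB / elapsed))

A real comparison would of course also cover reads, random access and metadata operations, each run on a freshly formatted filesystem.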
x86-64 and SSE2 performance on John the Ripper
- Written by: Gionatan Danti
- Category: Hardware analysis
I'm sure you have heard something like “software is always behind processor features”, in the sense that it usually takes many years before a new hardware feature is actively used by common, widespread software.
There are many examples of this trend: while the i386 processor brought 32-bit computing and other advanced capabilities to the x86 world in 1985, the first real 32-bit operating system from Microsoft was Windows NT 3.1, released in 1993 – a full 8-year gap. Obviously, some time must pass between hardware support and software support, simply because you need time to write your software – and you can do that only once you have working hardware. So, a certain gap is not only understandable, but often inevitable.