Virtualization
- Written by: Super User
- Category: Virtualization
OK, I'm not the first to talk about Windows and ACPI shutdown: as a simple Google search shows, many sites and blogs have covered it over the past years. Why is it such a big deal, when we can simply press the “shut down” button that Windows shows us? Because with the rise of virtual machines, it has become quite important to be able to automatically execute a clean and fast shutdown of all running guests, without the need to manually log in to each one. Theoretically, all you need to do is issue an “ACPI shutdown” command to the running guests and voilà – each one should respond to the ACPI event by initiating a proper OS shutdown. However, things are rarely so simple: by default, the various Windows versions respond to ACPI events with different behaviors – sometimes doing nothing at all.
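To make things concrete, here is a minimal sketch of such an automated shutdown on a libvirt/KVM host (assuming the virsh tool is available; the two-minute timeout is an arbitrary placeholder to adapt):

```bash
# Sketch: send an ACPI shutdown event to every running libvirt guest,
# then wait up to two minutes for them all to power off.
for guest in $(virsh list --name --state-running); do
    virsh shutdown "$guest" --mode acpi
done

for i in $(seq 1 120); do
    [ -z "$(virsh list --name --state-running)" ] && break
    sleep 1
done
```

Whether each Windows guest actually honors the ACPI event is exactly the problem: it depends on the version-specific power-button behavior mentioned above.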
- Written by: Gionatan Danti
- Category: Virtualization
Virtual machine storage performance is a hot topic – after all, one of the main problems when virtualizing many OS instances is correctly sizing the I/O subsystem, both in terms of space and speed.
Of course, performance is not the only thing to consider: another big role is played by flexibility and ease of use and configuration. So the perfect storage subsystem is the right compromise between performance, flexibility and ease of use.
Somewhat ironically, of the three requisites written above, performance is the most difficult to measure, as “I/O speed” is an ephemeral concept. It is impossible to speak about I/O performance without taking into account three main parameters: I/O block size, I/O operations per second (IOPS) and queue depth (the number of outstanding, concurrent block requests). This represents the first problem in correctly sizing your disk subsystem: you have to guess the expected access pattern, and you had better guess right.
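As an illustration, a synthetic benchmark such as fio lets you pin down all three parameters explicitly – a sketch, assuming a disposable test file (the path and sizes here are placeholders):

```bash
# Sketch: 4 KiB random reads at queue depth 32 against a throwaway test file.
# Block size (--bs), queue depth (--iodepth) and the measured IOPS are exactly
# the three parameters discussed above.
fio --name=randread-test --filename=/tmp/fio-testfile --size=1g \
    --direct=1 --ioengine=libaio --rw=randread --bs=4k --iodepth=32 \
    --runtime=60 --time_based --group_reporting
```

Re-run the same job with, say, --bs=64k or --iodepth=1 and the reported IOPS will change dramatically – which is why quoting an “I/O speed” without these parameters means very little.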
- Written by: Gionatan Danti
- Category: Virtualization
In the last ten years, full-virtualization technologies have gained much traction. While this has sometimes led to an excessive proliferation of virtual machines, the key concept is very appealing: as CPU performance and memory capacity relentlessly grow over time, why not use this ever-increasing power to consolidate multiple operating system instances on one single, powerful server?
If done correctly (i.e. without an unnecessary growth in the total number of OS instances), this consolidation process brings considerably lower operating costs, both from the electricity and the maintenance/administration standpoints.
However, in order to extract good performance from virtual machines, it is imperative to correctly size the virtualization host: the CPU, disk, memory and network subsystems should all be capable of sustaining the expected average workload, plus some headroom for the inevitable usage peaks.
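One simple way to sanity-check such sizing, assuming a Linux host with the sysstat package installed, is to sample all the main subsystems under real load – a sketch:

```bash
# Sample CPU usage (-u), memory (-r), I/O and transfer rates (-b) and network
# interfaces (-n DEV) every 5 seconds, 120 times (10 minutes of data).
sar -u -r -b -n DEV 5 120
```

Comparing the observed peaks (not just the averages) against the host's raw capacity gives a first idea of how much headroom is really left.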
- Written by: Gionatan Danti
- Category: Virtualization
As you probably already know, there are basically two different schools in the virtualization camp:
- the para-virtualization one, where a modified guest OS uses specific host-side syscalls (hypercalls) to do its “dirty work” with physical devices
- the full hardware virtualization one (HVM), where the guest OS runs unmodified and the host system “traps” whenever the guest tries to access a physical device
The two approaches are vastly different: the former requires extensive kernel modifications on both the guest and host OSes but gives you maximum performance, as both kernels are virtualization-aware and optimized for the typical workload they experience. The latter approach is totally transparent to the guest OS and often does not require many kernel-level changes on the host side but, as the guest OS is not virtualization-aware, it generally delivers lower performance.
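As a quick practical aside, on a Linux host you can check whether the CPU exposes the hardware extensions required for accelerated HVM (Intel VT-x or AMD-V) – a minimal check:

```bash
# A non-zero count means the CPU advertises VT-x (vmx) or AMD-V (svm),
# so full hardware virtualization guests can run with hardware assistance.
egrep -c '(vmx|svm)' /proc/cpuinfo

# On a KVM host, also verify that the kvm kernel modules are loaded.
lsmod | grep kvm
```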
- Written by: Gionatan Danti
- Category: Virtualization
Almost one year ago, I checked how different cache settings affected KVM storage subsystem performance. The results were very clear: to obtain good I/O speed, you had to use the write-back or none cache policies, avoiding the write-through one. However, as the write-back policy intrinsically carried some data-loss risk, the safer bet was to not use any host-based cache at all (the “none” KVM cache option).
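For reference, the cache policy is selected per disk on the QEMU/KVM command line (or through the equivalent cache attribute in the libvirt domain XML); a sketch with placeholder paths – the binary may be qemu-kvm or qemu-system-x86_64 depending on the distribution:

```bash
# Sketch: boot a guest with a virtio disk that bypasses the host page cache.
# cache=none opens the image with O_DIRECT; cache=writeback uses the host page
# cache (fast, but risks data loss on host failure); cache=writethrough forces
# every write to stable storage (safe, but slow).
qemu-kvm -m 2048 \
    -drive file=/var/lib/libvirt/images/guest.img,if=virtio,cache=none
```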