A recap on the history of virtualization

Full virtualization, or binary translation, allows an unmodified guest OS to run as a virtual machine. Early VMware products used this technique. It is the oldest of the virtualization methods, and also the slowest. Still, up until the arrival of hardware-assisted virtualization, full virtualization was the most versatile way to get things done.

The second option is para-virtualization. Here the guest OS is heavily modified so that it is aware it is virtualized and calls into the hypervisor directly, leaving far less to be translated at run time. Since many guest operating systems were never modified to work with para-virtualization, the improved speed (compared to full virtualization) did not win many people over: the platform was far too restrictive for mainstream adoption. Later VMware products used VMI for para-virtualization, though VMware has recently retired that technique. Xen is the leader in para-virtualization, with para-virtualization modules built into many open-source (UNIX and Linux) operating systems today.
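A virtualization-aware guest first needs a way to know that it is running on a hypervisor at all. On Linux, that awareness is surfaced as the `hypervisor` CPU flag in `/proc/cpuinfo`. As a minimal sketch (the helper name and the string-based parsing are my own, chosen so the logic can be exercised on sample input):

```python
import os

def running_under_hypervisor(cpuinfo_text):
    """Return True if the 'hypervisor' CPU flag is present in the given
    /proc/cpuinfo text, i.e. the kernel detected it runs as a guest."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            return "hypervisor" in line.split(":", 1)[1].split()
    return False

# Only meaningful on a Linux machine, where /proc/cpuinfo exists.
if __name__ == "__main__" and os.path.exists("/proc/cpuinfo"):
    with open("/proc/cpuinfo") as f:
        print(running_under_hypervisor(f.read()))
```

On a fully virtualized guest this flag is set as well; it only tells you that you are virtualized, not which technique is being used.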

With these two techniques in place, virtualization became more and more mainstream. Processor manufacturers began to feel pressure from the market to speed virtualization up, and started implementing support for it directly in the processor, calling it ‘hardware-assisted virtualization’. AMD-V and Intel VT are the most commonly used implementations. Hardware-assisted virtualization has become the most widely used platform for virtualization, giving customers the best of both worlds: the speed of para-virtualization (without having to modify the guest OS) and the ease of use of full virtualization. It has therefore become the industry standard, and the platform is improving all the time: more and more features are being implemented, making virtualization even faster and more flexible. Hardware assistance is also reaching out to other forms of virtualization; VMDirectPath, for example, allows for virtualization of I/O devices, such as network interface cards or storage controllers.
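Whether a given host supports hardware-assisted virtualization can be read straight from its CPU flags: on Linux, Intel VT-x is advertised as `vmx` and AMD-V as `svm` in `/proc/cpuinfo`. A small sketch (the function name and string-based parsing are my own):

```python
import os

def hw_virt_support(cpuinfo_text):
    """Return 'Intel VT-x', 'AMD-V', or None based on the CPU flags line
    of the given /proc/cpuinfo text."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            if "vmx" in flags:   # Intel VT-x
                return "Intel VT-x"
            if "svm" in flags:   # AMD-V
                return "AMD-V"
    return None

# Only meaningful on a Linux machine, where /proc/cpuinfo exists.
if __name__ == "__main__" and os.path.exists("/proc/cpuinfo"):
    with open("/proc/cpuinfo") as f:
        print(hw_virt_support(f.read()) or "no hardware assist advertised")
```

Note that inside a virtual machine these flags may be masked by the hypervisor, so the check is most useful on the physical host itself.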

Future directions of the hypervisor

With the move to even heavier integration of the virtualization layer into the processor and other pieces of hardware, would it be weird to think that the hypervisor as we know it now will disappear? From a performance point of view, I’d like my entire Virtual Machine Monitor to dissolve into the hardware; I’d call it hypervisor-on-die. This could evolve from a modular chip, separate from the CPU itself (much like the old memory controller was separate from the CPU), to a hypervisor completely integrated into the CPU’s instruction set, running as instruction-set extensions much like AMD-V and Intel VT do today. That would require the current hypervisor architecture to be pulled apart and heavily optimized.

Future directions of hypervisor management

Consider how management of VMware ESX is likely to evolve. VMware has already stated that ESXi is the future, and that ESX with its Service Console will eventually die. Management functions will be executed from a freely movable, independent “Service Console”, now called the vMA (vSphere Management Assistant). This reduces the excess weight each hypervisor has to carry around while still allowing all management tasks to be performed. But with the hypervisor completely embedded in the CPU architecture, how will it be managed?

A first and simple approach is to use the vMA as it exists today. But consider physical remote access cards (like the Dell iDRAC and HP iLO). These cards are parasites feeding off the host’s capabilities, used to manage the hardware they live on. What if the vMA evolves into a similar parasite: integrated into the host, yet running on its own dedicated hardware? There would then be no need for separate software products, allowing for even tighter integration of the hypervisor and management software into the hardware.

With this post, I tried to work out a possible scenario for the future of virtualization on a technical level. Do you think this is a feasible way for the hypervisor and its management tools to evolve?