In my previous post I promised a detailed, multi-post story about the whole Microsoft vs. VMware thing. As the list goes on and on, I had to break this up into two posts to keep it readable. There was just too much to tell!

Let’s jump back on the horse:


Snapshots can be made in both VMware ESX and Microsoft Hyper-V. However, a couple of seemingly minor differences make snapshotting in Hyper-V a dangerous thing. When you remove a snapshot from a VM, Hyper-V doesn’t immediately consolidate the delta files into the virtual disk file; it only does so when the VM is powered off. In itself, this is a great idea: the VM doesn’t suffer a performance penalty while the snapshot is being removed. However, with SAN space still being expensive, SAN administrators are conservative about handing out space, and removing a snapshot requires extra space for the merge. Combine the two, and you have a potentially lethal situation for your VMs: data corruption is a very real possibility.
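To make that combination concrete, here is a minimal sketch (Python, with made-up numbers; real delta-file growth depends entirely on guest write activity, and the "delta's worth of scratch space" rule of thumb is my simplification) of the headroom check a Hyper-V admin effectively has to do before deleting a snapshot:

```python
# Illustrative only: a rough space check before removing a Hyper-V snapshot.
# All figures are hypothetical, not measured.

def merge_is_safe(base_vhd_gb, delta_avhd_gb, free_on_lun_gb):
    """The merge writes the delta blocks back into the base VHD, and the
    delta keeps growing until the VM is powered off, so the LUN needs
    headroom of roughly the delta's size for the merge to complete."""
    required_gb = delta_avhd_gb
    return free_on_lun_gb >= required_gb

# A conservatively provisioned LUN: 40 GB base disk, 12 GB delta, 8 GB free.
print(merge_is_safe(40, 12, 8))   # False: not enough headroom, corruption risk
print(merge_is_safe(40, 12, 20))  # True: enough room for the merge
```

On ESX the same consolidation happens while the VM runs, so the admin never hits the power-off-and-pray moment this check guards against.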

As explained here, this difference in snapshot removal can actually force administrators to power off a VM twice as often on patch Tuesday:

  1. Take VM snapshot
  2. Apply updates, which normally require that you…
  3. Reboot
  4. Test
  5. Remove snapshot

(…) The 5 steps above are only for VMware ESX.  If you are using Hyper-V as your platform, you are not done yet — the snapshot differences are not actually merged until the VM is powered off.

Hyper-V administrators, please add the following steps:

  1. Power off VM
  2. Wait for merge to finish
  3. Power on VM
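The behaviour behind those three extra steps can be sketched as a toy model (Python; this is a deliberate oversimplification of what Hyper-V actually does, and the class and attribute names are mine):

```python
# Toy model of the behaviour described above: on Hyper-V, deleting a
# snapshot only *marks* the delta for merging; the actual consolidation
# waits for power-off. Purely illustrative.

class ToyHyperVVM:
    def __init__(self):
        self.running = True
        self.deltas = 0            # outstanding .avhd delta files on disk
        self.pending_merge = False

    def take_snapshot(self):
        self.deltas += 1

    def delete_snapshot(self):
        self.pending_merge = True  # no disk space is reclaimed yet

    def power_off(self):
        self.running = False
        self.deltas = 0            # merge happens here, needing extra LUN space
        self.pending_merge = False

vm = ToyHyperVVM()
vm.take_snapshot()
vm.delete_snapshot()
print(vm.deltas)   # 1 -- delta still on disk after "deleting" the snapshot
vm.power_off()
print(vm.deltas)   # 0 -- only the power-off actually merged it
```

An equivalent toy ESX model would reclaim the delta inside `delete_snapshot()` itself, which is exactly why the five-step list suffices there.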

ISO copying

One great feature of ESX is its flexible handling of ISO images. ISOs can reside on an FC or iSCSI SAN, on an NFS NAS, or locally on VMFS volumes, and they are shared between all ESX hosts that have access to the datastore. There is no need to copy an ISO to every VM’s working directory. Now imagine a Microsoft Hyper-V / SCVMM environment with hundreds of VMs, each holding its own copy of a four-gigabyte ISO file. Eric Gray has written a great satire on the subject.
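A quick back-of-the-envelope calculation shows how badly that scales (Python; the 300 VMs and the 4 GB ISO are assumed figures for illustration, not from any real environment):

```python
# Back-of-the-envelope cost of per-VM ISO copies versus one shared
# ESX datastore copy. All numbers are assumptions.

iso_size_gb = 4
vm_count = 300

shared_datastore_gb = iso_size_gb          # one copy, mounted by every host
per_vm_copies_gb = iso_size_gb * vm_count  # one copy in every VM's directory

print(shared_datastore_gb, per_vm_copies_gb)  # 4 vs 1200: over a terabyte wasted
```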

Not only does SCVMM scatter terabytes of identical ISOs across a VMware vCenter and ESX environment, it doesn’t even fully support ESXi: copying ISOs to ESXi generates all kinds of errors.

Guest Operating System Support

While VMware has an impressive list of supported guest operating systems, Microsoft supports only two. One and a half, actually, as Novell SUSE Linux Enterprise Server support is crappy, incredibly crappy.

Recovering VMs

Did I mention that the small details make a world of difference? Imagine a virtualization host that just stops working, *poof*. Sadly, you haven’t got any backups, as it is just your testing environment. After fixing the host, you reinstall the hypervisor, and attach the disks to the server.

With VMware, you can simply add the VM back to the inventory using the datastore browser in the VI Client. With Hyper-V, however, the process is, shall we say, a little embarrassing:

A VM has to be explicitly exported before it can be imported. As you did not do this (and why would you? you didn’t expect the host to fail), there is no supported method of importing the VM.

Hyper-V VM import failed.

About drivers and HCLs

VMware has been much criticized for having a strict Hardware Compatibility List and thus supporting little hardware, especially when compared to Microsoft Windows. What most people forget is that virtualization projects usually (if not always) involve new hardware. A wide range of new servers from Dell, HP, IBM, Sun, etc. is supported by VMware, so this ‘strict’ HCL will not affect 99.99% of the people out there.

At the same time, this widely supported third-party driver model is exactly why I cannot believe Hyper-V will be totally stable. Even with all the signed drivers, testing, and so forth on current Windows versions, it still happens every so often that a driver kills the running OS. I wouldn’t be surprised if a driver one day takes down an entire Hyper-V host and all its VMs.

This third-party driver support model has other drawbacks, too: Microsoft has to rely on those third parties to implement features such as NIC teaming and VLAN support in, for instance, network card drivers. You could very well end up buying a brand-new server, only to find that its network card doesn’t support NIC teaming, rendering your implementation of a fast iSCSI SAN useless…