I attended the breakout session about long-distance VMotion, TA3105. This session presented results from a Long Distance VMotion joint validation research project by VMware, EMC and Cisco, presented by Shudong Zhou, Staff Engineer at VMware; Balaji Sivasubramanian, Product Marketing Manager at Cisco; and Chad Sakac, VP of VMware Technology Alliance at EMC.
What’s the use case?
With Paul Maritz mentioning the vCloud a lot, I can see where VMotion across datacenters can make itself useful. When migrating to or from any internal or external cloud, there’s a good chance you’d want to do so without downtime, i.e. with all your virtual machines in the cloud running.
What are the challenges?
The main challenge in getting VMotion working between datacenters isn’t the VMotion technology itself, but the adaptations needed for shared storage and networking. Because a virtual machine being VMotioned cannot have its IP address changed, some challenges exist with the network spanning across datacenters. You’ll need stretched VLANs: a flat network with the same subnet and broadcast domain at both locations.
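As a tiny illustration of what “same subnet” means here (the addresses are hypothetical, not from the session), Python’s `ipaddress` module can express the constraint: the VM keeps its IP after migration, so the destination site’s port group must sit in the same stretched Layer 2 segment:

```python
import ipaddress

# Hypothetical stretched VLAN spanning both datacenters.
stretched_subnet = ipaddress.ip_network("10.0.50.0/24")

# The migrating VM keeps this address, so the destination site's
# port group must be in the same subnet and broadcast domain.
vm_ip = ipaddress.ip_address("10.0.50.42")

print(vm_ip in stretched_subnet)  # True: the VM can keep its IP
```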
The same goes for storage. VMotion requires all ESX hosts to have read/write access to the same shared storage, and storage traffic does not travel well over (smaller) WAN links. There needs to be some kind of synchronization, or a different way to present datastores at both sides.
Replication won’t work in this case, as replication doesn’t provide active/active access to the data. The secondary datacenter doesn’t have active access to the replicated data; it just has a passive copy, which it can’t write to. Using replication as a method of getting VMotion across datacenters working will result in major problems, one of which is your boss making you stand in the corner for being a bad, bad boy.
What methods are available now?
Chad explained a couple of methods of making the storage available to the secondary datacenter:
Single SAN spanning both datacenters
This is the simplest way to get long-distance VMotion up and running. This method entails a single SAN, with datastores presented to ESX servers in both datacenters, and doing a standard VMotion. Doing this will leave the virtual machine’s files at the primary site, and as such is not completely what you’d want, as you’re not independent of the primary location.
Storage VMotion before compute VMotion
This method first does a Storage VMotion from a datastore at the primary location to the secondary location. After the Storage VMotion, a compute VMotion is done. This solves the problem with the previous method, as the VM moves completely. It does take a lot of time, and does not (yet) have any improvements that leverage the vStorage API for, for instance, deduplication.
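To get a feel for why this method takes so long, here’s a back-of-the-envelope sketch (the VM sizes are hypothetical, and real VMotion copies memory iteratively, so treat this as a rough lower bound): a compute-only VMotion moves roughly the VM’s RAM, while Storage VMotion also has to push the full VMDKs over the inter-site link.

```python
# Back-of-the-envelope: data moved in each approach, for a
# hypothetical VM with 4 GB RAM and a 100 GB disk, over an
# OC-12 (~622 Mbps) inter-site link.
link_gbps = 0.622

ram_gb = 4      # compute VMotion copies (roughly) active memory
disk_gb = 100   # Storage VMotion also copies the VMDKs

def transfer_seconds(gigabytes, gbps):
    # gigabytes -> gigabits, divided by link speed in Gbps
    return gigabytes * 8 / gbps

vmotion_only = transfer_seconds(ram_gb, link_gbps)
storage_then_compute = transfer_seconds(ram_gb + disk_gb, link_gbps)

print(round(vmotion_only))          # roughly 51 seconds
print(round(storage_then_compute))  # roughly 22 minutes
```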
Remote VMotion with advanced active/active storage model
Here’s where Chad catches fire and really starts to talk his magic. This method involves a SAN solution with additional layers of virtualization built in, so two physically separated heads and shelves share RAM and CPU. This turns both heads into a single, logical SAN, which is fully geo-redundant. When doing a long-distance VMotion, no additional steps are needed on the vSphere platform to make all data (VMDKs, etc.) available at the secondary datacenter, as the SAN itself does all the heavy lifting. This technique does its trick completely transparently to the vSphere environment, as only a single LUN is presented to the ESX hosts.
What’s VMware’s official statement on long-distance VMotion?
VMotion across datacenters is officially supported as of September 2009. You’ll need VMware ESX 4 at both datacenters and a single instance of vCenter 4 managing both datacenters. Because VMware DRS and HA aren’t aware of any physical separation, long-distance VMotion is supported only when using a separate cluster for each site. Spanned clusters could work, but are simply not supported. The maximum distance at this point is 200 kilometers, which simply reflects that the research did not test round-trip latencies above 5 ms. The minimum link between sites is OC-12 (622 Mbps). The bandwidth requirement for normal VMotions within the datacenter and cluster will change accordingly. The network needs to be stretched to include both datacenters, as the IP address of the migrated virtual machine cannot change.
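The 200 km figure and the 5 ms RTT budget fit together nicely: assuming light propagates through fiber at roughly two-thirds of its vacuum speed (about 200 km per millisecond, a common rule of thumb not stated in the session), a quick sanity check shows 200 km of fiber eats up 2 ms of the budget on propagation alone:

```python
# Sanity check: does a 200 km fiber path fit a 5 ms RTT budget?
# Assumes propagation in fiber at roughly 200 km per millisecond
# (about two-thirds of the speed of light in vacuum).
fiber_km_per_ms = 200.0
distance_km = 200

one_way_ms = distance_km / fiber_km_per_ms   # 1.0 ms each way
rtt_ms = 2 * one_way_ms                      # 2.0 ms for propagation alone

# Real links add switching and queuing delay on top, so 200 km
# fits the 5 ms budget but doesn't leave unlimited headroom.
print(rtt_ms <= 5.0)  # True
```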
Certain network extensions are needed for long-distance VMotion to be supported. Be prepared to use your credit card to buy a Cisco DCI-capable device like the Catalyst 6500 VSS or Nexus 7000 vPC. On the storage side, extensions like WDM, FCIP and optionally Cisco I/O Acceleration are required.

To summarize, a supported setup requires:
- Single vCenter instance managing both datacenters;
- At least one cluster per site;
- A single vNetwork Distributed Switch (like the Cisco Nexus 1000v) across clusters and sites;
- Network routing and policies need to be synchronized or adjusted accordingly.
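The supportability rules above can be sketched as a simple checklist function. This is purely illustrative (the field names are hypothetical, not a VMware API), but it captures the constraints from the session in one place:

```python
def vmotion_across_dc_supported(setup):
    """Check a hypothetical setup description against the support
    rules in this post: a single vCenter, at least one cluster per
    site, <= 5 ms RTT, >= 622 Mbps link, and a stretched L2 network."""
    return (
        setup["vcenter_instances"] == 1
        and all(n >= 1 for n in setup["clusters_per_site"])
        and setup["rtt_ms"] <= 5.0
        and setup["link_mbps"] >= 622
        and setup["stretched_layer2"]
    )

example = {
    "vcenter_instances": 1,
    "clusters_per_site": [1, 1],   # one cluster per datacenter
    "rtt_ms": 4.2,
    "link_mbps": 622,
    "stretched_layer2": True,
}
print(vmotion_across_dc_supported(example))  # True
```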
More information:
- Check out the PDF “Virtual Machine Mobility with VMware VMotion and Cisco Data Center Interconnect Technologies”
- Information about VMware MetroCluster
- Information about the Cisco Datacenter Interconnect (DCI)