The EqualLogic Multipathing Extension Module for VMware vSphere is a Path Selection Plugin (PSP) in the VMware Pluggable Storage Architecture (PSA). It also includes a host connection manager (EHCM) that manages the iSCSI sessions to the EqualLogic array.

The EqualLogic Multipathing Extension Module for VMware® vSphere enhances the native multipathing capabilities of VMware vSphere 4.1 when used with an EqualLogic PS Series SAN, providing:

  • Automatic connection management
  • Automatic load balancing across multiple active paths
  • Increased bandwidth
  • Reduced network latency

In short, it provides a more knowledgeable way to do load balancing: because it communicates with the array and knows how volumes are spread across the (tiered) members, it can improve I/O performance. The EHCM (running as a CIM provider) builds and manages the iSCSI sessions, and the PSP routes I/O down the optimal path.

Requirements

You should be running PS Series firmware 4.0.7 or later (I recommend version 5.1.0) and VMware vSphere 4.1. Also, make sure the network between the host and the storage array is not routed; ESXi MPIO cannot handle a routed network. Finally, plan to put the host in maintenance mode and reboot it after installation for the MEM to take effect.

Software or Dependent Hardware iSCSI?

Software iSCSI is fully supported.

The EQL MEM supports the Broadcom NetXtreme II with iSCSI offload, otherwise known as dependent hardware iSCSI. Jumbo Frames are (still) not supported in that configuration, though, and a single adapter can create a maximum of 64 iSCSI sessions.

Some HBAs do not support iSCSI session management at all, and in that situation there is no performance benefit from the MEM.

Configuring the networking stack

When you’re using software iSCSI, you’ll need to configure networking first.

  • Create a separate vSwitch for iSCSI traffic with Jumbo Frames enabled
  • Decide on the number of VMkernel ports. Assume an (n+1):n relationship between VMkernel ports and vmnics: if you want to dedicate two physical adapters to iSCSI, use three VMkernel ports; for four physical adapters, use five. I’ll explain why in a bit.
  • Create VMkernel ports (with Jumbo Frames enabled)
  • Configure the 1:1 relationship between vmk# and vmnic# (remove the extra uplinks) for the second, third and any additional VMkernel ports. Don’t do this for the first VMkernel port.
  • Enable the software iSCSI Initiator
  • Bind the VMkernel ports to the iSCSI adapter (with esxcli swiscsi nic add --nic vmk# --adapter vmhba##) for the second, third and any additional VMkernel ports. Don’t do this for the first VMkernel port. A rough CLI sketch of these steps follows right after this list.
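
Here’s that sketch for a host with two dedicated iSCSI NICs. The vSwitch name, port group names, IP addresses and the vmnic/vmk/vmhba numbers are just examples, so adjust them to your environment and double-check which vmk numbers your host actually assigns:

# create the iSCSI vSwitch with Jumbo Frames and add both uplinks
esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -m 9000 vSwitch2
esxcfg-vswitch -L vmnic2 vSwitch2
esxcfg-vswitch -L vmnic3 vSwitch2

# create three port groups and a VMkernel port (MTU 9000) in each
esxcfg-vswitch -A iSCSI0 vSwitch2
esxcfg-vswitch -A iSCSI1 vSwitch2
esxcfg-vswitch -A iSCSI2 vSwitch2
esxcfg-vmknic -a -i 10.10.10.10 -n 255.255.255.0 -m 9000 iSCSI0
esxcfg-vmknic -a -i 10.10.10.11 -n 255.255.255.0 -m 9000 iSCSI1
esxcfg-vmknic -a -i 10.10.10.12 -n 255.255.255.0 -m 9000 iSCSI2

# with the software iSCSI Initiator already enabled, bind only the
# second and third VMkernel ports to the software iSCSI adapter
esxcli swiscsi nic add --nic vmk2 --adapter vmhba33
esxcli swiscsi nic add --nic vmk3 --adapter vmhba33

Note that only the second and third VMkernel ports (vmk2 and vmk3 here, assuming vmk0 is the management port) are bound to the software iSCSI adapter; the first one is left unbound on purpose, for reasons explained below. The per-port-group uplink overrides (the 1:1 mapping) are shown a bit further down.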

Why do we want ‘n+1’ VMkernel ports? Why not just settle on a 1:1 relationship? The problem is that if the physical switch to which the first VMkernel port in the iSCSI subnet is connected fails, iSCSI traffic can be affected, even though you’re using multiple physical switches and have set up the iSCSI stack accordingly. This happens because the first VMkernel port in the iSCSI subnet is used for the default route. The EqualLogic MEM uses ICMP ping to test for connectivity between the host and the SAN members. If that switch is down, the ICMP pings cannot be routed back through the first VMkernel port and the MEM cannot rebuild the iSCSI sessions.

In the ‘Known Issues and Limitations’ section of the User Guide, Dell actually lists this problem:

Failure On One Physical Network Port Can Prevent iSCSI Session Rebalancing
In some cases, a network failure on a single physical NIC can affect kernel traffic on other NICs. This occurs if the physical NIC with the network failure is the only uplink for the VMKernel port that is used as the default route for the subnet. This affects several types of kernel network traffic, including ICMP pings which the EqualLogic MEM uses to test for connectivity on the SAN. The result is that the iSCSI session management functionality in the plugin will fail to rebuild the iSCSI sessions to respond to failures or SAN changes.

The solution is to connect the first VMkernel port to all physical uplinks (and thus all physical switches) of the iSCSI vSwitch. Because this VMkernel port has multiple uplinks, you cannot bind it to the software iSCSI Initiator and it won’t carry any iSCSI I/O; it becomes a VMkernel port for ICMP ping only. iSCSI traffic will flow through the second and third (or even fourth and fifth, if you’re using four physical uplinks) VMkernel ports. I think it was Duco Jaspars who came up with the solution for this issue a couple of months ago.
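
Continuing the two-NIC sketch from above, the per-port-group uplink overrides would look roughly like this. The first port group (iSCSI0, the ICMP-only one) keeps both uplinks, while the others are each restricted to a single uplink; the -p/-N combination is how I recall esxcfg-vswitch handling this, so verify it against the help output on your own host:

# iSCSI0 keeps both uplinks, nothing to change there
esxcfg-vswitch -p iSCSI1 -N vmnic3 vSwitch2
esxcfg-vswitch -p iSCSI2 -N vmnic2 vSwitch2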

Installation

Installation is quite easy, as EqualLogic has been kind enough to include a ZIP file that can be used with Update Manager, the vMA or the vSphere CLI. I highly recommend using Update Manager to install the PSP, as it is by far the easiest way.
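
If you do prefer the vSphere CLI or the vMA, installing the offline bundle should come down to a single vihostupdate call. The bundle filename below is illustrative; use the name of the ZIP you actually downloaded, and put the host in maintenance mode first:

vihostupdate --server esxi --install --bundle dell-eql-mem-1.0.0.zip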

EqualLogic has also given us setup.pl, an installation and configuration script. There are three parts to this script: ‘installation’, ‘configuration’ and ‘set parameters’. I’ll skip the ‘installation’ part, as we’re using Update Manager. The second part would save some time configuring the vSwitch, VMkernel ports, uplinks and software iSCSI Initiator, but because we’re using a slightly different setup (the first VMkernel port), it won’t help us much either.

Editing ehcmd.conf

That leaves us with the third part, ‘set parameters’. By default, the EHCM creates two sessions to a volume slice (the portion of a volume on a single member). In configurations with four vmnics (four VMkernel ports plus a fifth for ICMP), you’ll want to bump this up to four sessions to take full advantage of the four physical links. Use setup.pl --setparam for this:

setup.pl --setparam --name="membersessions" --value="4" --server="esxi"

While you’re at it, increase the total number of sessions to a volume to 12:

setup.pl --setparam --name="volumesessions" --value="12" --server="esxi"

Instead of setup.pl, you can manually edit /etc/cim/dell/ehcmd.conf and make the same adjustments.
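
I haven’t documented the exact file syntax here, but the parameters in ehcmd.conf appear to be simple name/value pairs, so after the change you would expect entries along these lines (treat this as a sketch and compare with the file on your own host):

membersessions = 4
volumesessions = 12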

Resources