Recently, at Tech.Unplugged, I had a couple of great conversations and sat in on a couple of cool sessions. During @hansdeleenheer's session on Hype-Convergence, a comment from @julian_Wood got me thinking.

We discussed what makes HCI such a hot and interesting topic right now. Besides all the hyped-up attention, there's actually a lot of technical merit to the concept. Let's set aside the advantages (or disadvantages) of the shared-nothing distributed architecture for now and focus on what the bundling of hardware, hypervisor, storage (and a little bit of networking) can deliver for a customer.

Let the big boys handle the big guns

First off, by having a single vendor think through the architecture, the integration, the implementation and the support, you as a customer don't have to deal with all that complexity. The complexity isn't really going away; you're paying your HCI vendor to solve those issues for you in an effective way. The vendor can choose to use only a very limited set of hardware and software components, driving down the number of variables they have to take into account. This should, and usually will, improve the quality of the product. It also lets them solve issues down the road much more easily, again because there are fewer variables to test and patch for.

Why I love Hyper-Converged

Julian and I concluded the discussion on that same note.

And that is really why I love hyper-converged: with such a tightly integrated stack, the vendor is able to push the envelope of what they can do for customers, and they're now stepping into the automation and orchestration of common operational procedures. A great example in this space is Nutanix with their one-click upgrade system. It used to cover only their storage layer, but they've since expanded it to the hypervisor: supported hypervisors (ESXi and Hyper-V for now) are upgraded with the same ease. It isn't anything magical (although, if you look at how hard this is with VMware's VUM, it might actually be…), but it's just so much easier for Nutanix to build out and support such a feature than it is for customers to do it themselves. Tight integration and low variance aside, it's great to see Nutanix think from an operational perspective: they also support this for hardware device firmware and BIOS, traditionally one of the hardest parts of any infrastructure to keep up to date and consistent.
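To make that a bit more concrete, below is a minimal sketch of what an orchestrated, rolling, one-node-at-a-time upgrade loop could look like. To be clear: this is not Nutanix's actual implementation or API; the cluster and node objects and their methods are hypothetical placeholders, purely to illustrate the kind of workflow a vendor can automate when it controls the whole stack.

```python
# Hypothetical sketch of a rolling, one-node-at-a-time upgrade workflow.
# None of these classes or calls are real Nutanix (or VMware) APIs; they
# only illustrate the orchestration pattern a tightly integrated stack enables.

class UpgradeError(Exception):
    pass


def rolling_upgrade(cluster, image):
    """Upgrade every node in the cluster without taking workloads down."""
    for node in cluster.nodes():
        # 1. Pre-flight: only proceed if the cluster can tolerate losing this node.
        if not cluster.is_fault_tolerant(without=node):
            raise UpgradeError(f"Cluster cannot tolerate taking {node.name} offline")

        # 2. Evacuate: live-migrate workloads off the node (maintenance mode).
        node.enter_maintenance_mode()

        # 3. Apply the new image; hypervisor, firmware and BIOS can follow the same pattern.
        node.install(image)
        node.reboot_and_wait()

        # 4. Validate before moving on; otherwise stop and leave the remaining nodes untouched.
        if not node.health_check():
            raise UpgradeError(f"{node.name} failed its post-upgrade health check")

        # 5. Return the node to service and continue with the next one.
        node.exit_maintenance_mode()
```

The code itself isn't the point; the point is that every step (fault-tolerance checks, live migration, upgrade images, health checks) is something the HCI vendor already controls end to end, which is exactly why they can hide it all behind a single button.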

Push the envelope

I wonder how far HCI vendors can push this envelope, and how soon we’ll see these kinds of operational optimizations in adjacent fields, such as virtualized networking and application provisioning.

Right now, companies like Nutanix are mostly automating infrastructure-related operations, such as upgrades of firmware, BIOS, the hypervisor and the storage layer. I'm very curious to see what kind of automation (and commoditization) is up next. One obvious bit of low-hanging fruit is network provisioning, which is greatly helped by the recent push in network virtualization.
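As a thought experiment, here's roughly what commoditized network provisioning could look like from an HCI management plane: one declarative API call instead of box-by-box switch and host configuration. The endpoint and payload below are made up for illustration only; real network virtualization platforms each have their own interfaces.

```python
import requests

# Made-up management-plane endpoint, used only to illustrate declarative,
# API-driven network provisioning. This is not a real Nutanix, VMware or NSX API.
MGMT_API = "https://hci-mgmt.example.local/api/v1/networks"


def provision_network(name, vlan_id, subnet):
    """Ask the management plane for a new virtual network in a single call."""
    payload = {
        "name": name,
        "vlan": vlan_id,   # or an overlay segment ID in a pure network-virtualization setup
        "subnet": subnet,
        "dhcp": True,
    }
    response = requests.post(MGMT_API, json=payload, timeout=30)
    response.raise_for_status()
    return response.json()["uuid"]


# Example: one call replaces logging in to every switch and hypervisor host separately.
# network_id = provision_network("app-tier", vlan_id=120, subnet="10.10.120.0/24")
```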

But as always, it's the applications that give us the hardest time, both in complexity and in effort. I'm guessing this is the next holy grail: standardize application deployment, configuration and lifecycle management so that operational management can be automated from within an HCI solution. This is where containers come in. Containers were, coincidentally, also a hot topic at Tech.Unplugged, and I highly recommend you watch Nigel's presentation on containers from that event. I agree with Nigel that VMs are a necessary evil at this moment to support legacy, monolithic apps, but by themselves, virtual machines and the Guest OS running inside add no value to the application. They do, however, take up precious time, add complexity, cause downtime and so on.
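To illustrate the contrast with VM lifecycle management, here's a minimal sketch using the Docker SDK for Python: the application ships as an image, and "deployment" shrinks to running (and later replacing) a container, with no Guest OS to patch or babysit. The image names and settings are placeholders, not a reference to any real application.

```python
import docker  # Docker SDK for Python (pip install docker)

client = docker.from_env()

# Placeholder image and settings; the point is that the application, its runtime
# and its configuration travel together as one immutable image.
container = client.containers.run(
    "registry.example.local/webshop:2.4.1",   # hypothetical application image
    name="webshop",
    detach=True,
    ports={"8080/tcp": 80},
    environment={"DB_HOST": "db.example.local"},
    restart_policy={"Name": "on-failure"},
)

# "Upgrading" is not patching a Guest OS in place; it's replacing the container
# with one built from a newer image.
container.stop()
container.remove()
client.containers.run(
    "registry.example.local/webshop:2.5.0",
    name="webshop",
    detach=True,
    ports={"8080/tcp": 80},
)
```

That stateless, throwaway model is exactly what makes application lifecycle management automatable in the first place.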

HCI + Containers = next step

Containers are going to help us minimize the interaction with the VM and Guest OS, making those layers pretty much stateless and volatile instead of keeping them alive for as long as technically possible. I bet you know an example or two of workloads that started life on a physical machine, were P2V'ed, had their Guest OS and application upgraded multiple times and were migrated from old to new virtual environments, adding up to a 5+ year lifecycle. That's pretty insane, to be honest!

Integrating container technology into existing HCI stacks (alongside hypervisor-based technology) could really help the transition from old monolithic legacy apps to new microservice-oriented application architectures, all from the same infrastructure management.

So I guess the whole point of this post is to emphasize that what HCI vendors are doing right now (commoditizing not only hardware and software, but also operational procedures for infrastructure operations) could very well be a ramp-up for containers as a helping hand in commoditizing application lifecycle management. I hope vendors like Nutanix, VMware and Microsoft will soon embrace the combination of these two trends and push IT into a whole new era of how we, as IT infrastructure guys, work with the application layer, finally doing away with VMs, clunky Guest OSes and monolithic applications.