The dust is starting to settle after this week’s VMworld. For me, the biggest announcement was Project Pacific: embedding the Kubernetes control plane directly into vSphere.

VMware has seen that the future is Kubernetes and has acknowledged vSphere’s shortcomings in getting to that future state. Project Pacific is not a superficial, ‘look at us, we do Kubernetes’ bolt-on. It re-architects ESXi and vCenter to natively use the Kubernetes APIs to drive platform operations.

My technical heart certainly beats a little faster after reading up on the technical details of the project. In short, there are four reasons for this:

  1. VMware is integrating the Kubernetes control plane (called the ‘Supervisor Cluster’ in vSphere terminology) directly into vSphere.
  2. This means developers and IT admins can use the native Kubernetes APIs to consume virtual machines and containers alike. This is what Kubernetes is meant to be: a scheduler of resources.
  3. Guest Clusters are vanilla, upstream Kubernetes, deployable via the Supervisor Cluster. This is like AKS, EKS or GKE. Very similar to GKE-on-Prem and Anthos. But free.
  4. Containers run natively on vSphere and are now first-class citizens in the vSphere world.

Let’s look at each of these through an Enterprise IT lens.

The Kubernetes-in-vSphere Architecture of Project Pacific

Project Pacific is ‘Kubernetes in vSphere’. VMware has completely and natively integrated Kubernetes into vSphere. This integration is called the Supervisor Cluster: a special version of Kubernetes with a number of modifications to make it work well with vSphere. While most developers and workloads won’t care, this does mean the Supervisor Cluster is not vanilla, upstream Kubernetes.

Enabling Kubernetes ‘as a feature’ per ESXi cluster, for free, protects and prolongs existing investments in on-prem infrastructure and vSphere. It helps IT admins transition to this new world a little more smoothly while giving developers the freedom to consume containers without cloud.

Project Pacific architecture

The highlights of the architecture are:

  1. ESXi hosts turn into Kubernetes worker nodes through ‘Spherelets’, an ESXi-native implementation of the Kubelet host agent (see the Pod sketch after this list). These Spherelets run alongside hostd on every ESXi host.
  2. The Supervisor Cluster is properly multi-master, with etcd co-located on three API server VMs. There’s a Supervisor Cluster per vSphere cluster; potentially multiple Supervisor Clusters per vCenter.
  3. The Supervisor is integrated with NSX-T (to create network segments for namespaces and Tier-1 routers and distributed firewalls for each Supervisor Cluster) and Cloud-native Storage (CNS) so developers can consume these resources on-demand and self-service.
  4. Native containers run on the CRX runtime.
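
As a deliberately minimal illustration of what points 1 and 4 mean in practice: a completely standard Pod spec applied to a namespace on the Supervisor Cluster would be scheduled onto an ESXi host via its Spherelet and run on CRX. Nothing in the spec itself is vSphere-specific; the namespace name below is just an example.

```yaml
# A standard Kubernetes Pod spec; on the Supervisor Cluster this would be
# scheduled onto an ESXi host (via its Spherelet) and run on the CRX runtime.
apiVersion: v1
kind: Pod
metadata:
  name: web
  namespace: my-app        # example application namespace
spec:
  containers:
    - name: nginx
      image: nginx:1.17
      ports:
        - containerPort: 80
```

The integration work all sits underneath this spec; from the developer’s point of view it is just Kubernetes.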

Want more info on the architecture? There are two videos in particular to watch:

Tech Field Day Extra at VMworld 2019

Consuming vSphere VMs via Kubernetes

Think of it this way: how many applications actually use just containers? Not many organizations run their (legacy) databases in containers; why spend the time and effort only to end up with an equally monolithic database server, just in a container? Applications span physical and virtual machines, containers, SaaS, serverless, mainframes and other technologies.

Project Pacific is an effort to marry the current (vSphere virtual machines) with the new (containers) on the same platform to make consumption of resources easier. The key aspect of the deep integration is that adding the Kubernetes APIs on top of the existing vSphere scheduler makes consuming vSphere resources much easier, regardless of what kind of resources they are.

The project adds support for native vSphere virtual machine constructs in Kubernetes via fewer than a dozen custom CRDs; for this, VMware has created the ‘VM Operator’. This is a much cleaner implementation than other projects, like KubeVirt.

My guess is that this is aimed squarely at developers, who are much more likely to be familiar with the Kubernetes YAML spec. Effectively, being able to spin up VMs using YAML and the Kubernetes API replaces ticketing systems for developers. Yay!
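
To make that tangible, here’s a minimal sketch of what requesting a VM through the VM Operator could look like. I’m assuming a ‘VirtualMachine’ custom resource here; the API group, version, field names and the class/image values are illustrative guesses on my part, not confirmed specifics.

```yaml
# Illustrative sketch only: the exact API group, version and field names the
# VM Operator exposes may differ from what is shown here.
apiVersion: vmoperator.vmware.com/v1alpha1   # assumed API group/version
kind: VirtualMachine
metadata:
  name: legacy-db-01
  namespace: my-app            # the application's namespace
spec:
  imageName: ubuntu-18.04      # assumed name of a VM image available to the namespace
  className: small             # assumed VM class describing CPU/memory sizing
  powerState: poweredOn
```

The idea is that applying this with kubectl turns into a regular vSphere VM, and deleting the object tears it down again, just like any other Kubernetes resource.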

Using the Kubernetes API to create all resources

I think it’s great to be able to create and manage the lifecycle of virtual machines via the Kubernetes API and YAML. It makes it easy to manage their state as code (as with Infrastructure-as-Code tools like Terraform), re-use templates across teams and move virtual machines into the realm of ‘cattle, not pets’. Developers can keep using their YAML skills without having to learn anything new, and they no longer have to go through ticketing systems and Ops people to request new resources, just like in public cloud.

Kubernetes Operators make installing third-party commodity software really easy. IT can install a library of these Operators for easy, app-store-like consumption of third-party software ‘as a service’, so developers don’t have to install and configure their dependencies, databases and frameworks manually.
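
As a sketch of what that consumption model could feel like for a developer: the ‘Database’ resource below is entirely hypothetical (kind, API group and fields are made up), standing in for whatever custom resource a published Operator actually exposes.

```yaml
# Hypothetical example: 'Database', its API group and its fields are stand-ins
# for whatever custom resource an installed Operator really provides.
apiVersion: example.com/v1
kind: Database
metadata:
  name: orders-db
  namespace: my-app
spec:
  engine: postgres       # which database engine to provision
  version: "11"          # desired engine version
  storage: 50Gi          # requested storage size
```

The developer declares what they need; the Operator handles provisioning, configuration and day-2 operations behind the scenes.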

On-demand Kubernetes clusters for developers

Think of Project Pacific Guest Clusters as a managed Kubernetes service (similar to AKS, EKS or GKE(-on-prem)). But for free. And adjacent to most of the application’s data.

This functionality gives developers a cloud-like experience for on-demand, self-service Kubernetes. Guest Clusters are consumed via the Supervisor Cluster’s Kubernetes API: creating a Kubernetes cluster is done via a YAML spec, similar to how you’d create a container. This means that even Kubernetes clusters themselves can now be part of the GitOps / Infrastructure-as-Code lifecycle, adding more control and flexibility for developers. This ‘managing Kubernetes clusters via the Kubernetes API’ approach is not a VMware-specific project, but part of the Kubernetes SIGs projects: cluster-api.
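
As a rough sketch of that request, in the style of the upstream Cluster API project: the kind, API group, version and fields Project Pacific actually exposes for Guest Clusters may well differ, so treat this as illustrative only.

```yaml
# Rough, Cluster API flavoured sketch; the exact resource Project Pacific uses
# for Guest Clusters may have a different kind, group and fields.
apiVersion: cluster.x-k8s.io/v1alpha2
kind: Cluster
metadata:
  name: dev-team-cluster
  namespace: my-app
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]    # pod network for the Guest Cluster
    services:
      cidrBlocks: ["10.96.0.0/12"]      # service network for the Guest Cluster
```

Check a manifest like this into Git, apply it via the Supervisor Cluster’s API, and the cluster’s whole lifecycle (create, upgrade, delete) becomes just another declarative resource.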

These Guest Clusters are native, upstream Kubernetes. No modifications. But containers in a Guest Cluster don’t run natively on ESXi; they run inside VMs on ESXi. That costs some performance and security compared to native containers, but the compatibility matters: some workloads require a specific Kubernetes version, privileged containers, or specific networking integrations like Calico.

Running containers natively on vSphere

This is where the other major part of Pacific comes in: running containers directly on ESXi. With the Kubernetes API now running on vSphere, running containers natively is the next logical step. I talked about CRX, the vSphere container runtime, here. The tl;dr of CRX is that it’s a very lightweight, paravirtualized Linux kernel that runs all containers in a Pod.

It’s not unlike VMX, the virtual machine runtime, but CRX is much, much simpler. There are simply fewer variables to take into account, and VMware has full control over the kernel. The result is a highly tuned Linux kernel and ESXi runtime to run containers on.

While CRX is based on Photon OS, CRX is not a Linux distribution like Photon. CRX has no systemd and no traditional user space. The kernel doesn’t boot in a traditional sense, either; CRX uses a technique called Direct Boot to jump straight into the kernel’s main init routine instead of going through a normal boot sequence, skipping things like ACPI initialization and power management.

All containers in a Pod run on the same CRX kernel runtime. This improves security and performance. This is almost like using unikernels without the pain of generating a unikernel for each container.

Namespaces tie all resources together

With application resources scattered across native containers, containers running in Guest Clusters, traditional VMs, VMs created as part of a Kubernetes YAML spec and serverless functions, how does a sysadmin even find all of the resources belonging to that one app? This is where the concept of namespaces comes in, applied to vSphere. Instead of managing individual VMs, sysadmins manage logical applications.

Kubernetes Namespaces in vSphere

This unit of management allows applying policies and settings at the logical application level instead of at the individual resource level. Well-known vSphere settings for resource scheduling and availability (resource reservations, DRS, HA and SPBM) combine with native Kubernetes settings, such as access control, for the namespace.
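
On the Kubernetes side, a minimal sketch of such a namespace could look like the manifest below, pairing the namespace with a standard ResourceQuota; the vSphere-specific knobs (reservations, DRS, HA, storage policies) live in vCenter and aren’t shown.

```yaml
# Minimal sketch: one namespace per logical application, plus a standard
# Kubernetes ResourceQuota capping what that application may consume.
apiVersion: v1
kind: Namespace
metadata:
  name: my-app
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: my-app-quota
  namespace: my-app
spec:
  hard:
    requests.cpu: "16"             # total CPU the namespace may request
    requests.memory: 64Gi          # total memory the namespace may request
    persistentvolumeclaims: "20"   # cap on persistent volume claims
```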

Is Project Pacific Kubernetes ‘done right’?

There are a couple of thoughts going ’round in my head. Let me break them up into two trains of thought.

The first one speaks to my technical side. I love how well VMware has integrated Kubernetes into vSphere:

  • The Supervisor Cluster, which makes Kubernetes a for-free feature toggle.
  • Spherelets to transform ESXi hosts into Kubernetes worker nodes.
  • Making containers secure and performant first-class citizens in vSphere with CRX.
  • Making VMs first-class citizens in Kubernetes with the VM Operator.
  • Guest Cluster consumption via the Kubernetes cluster-api, and commodity software via Kubernetes Operators.
  • Using Namespaces to manage logical apps spanning custom containers and VMs, as well as software delivered as Operators.

All of this makes me giddy with anticipation. I want to play around with it in a lab. I want to discover how it works. Discuss the minutiae over pizza and beers.

But at the same time, I struggle to see a long-term use case. Project Pacific still requires organizations to manage their own infrastructure, plan for capacity, keep versions up-to-date, manage security and compliance, and more.

Ultimately, I fear Project Pacific is merely a prolongation of existing investments customers made in vSphere. It helps VMware stay relevant in the datacenter just a little longer.

But make no mistake, Project Pacific will be popular. The sell to C-suite and IT Ops as a business enabler is quick and easy, at least if the licensing is done right. This could make VMware a lot of money.

But it’s like celebrating an improved diesel car when EVs are taking over the world. How long will it still be relevant?