My first Platform9 post gave an overview of the Managed OpenStack service, its use cases and benefits, plus an architectural overview. In this post, I will dive deeper into the architecture. But first, let's go back to the architectural overview to refresh your memory:

[Image: Platform9 Managed OpenStack architectural overview]

And the OpenStack architecture:

[Image: OpenStack architecture]

SaaS components

I haven't learned much about the SaaS part of their solution, even though this is really the part that makes the offering work. At the same time, this is the secret Platform9 sauce, so I imagine they won't share details easily. Looking at the architecture image above, there's one distinct non-OpenStack component: the Clarity UI. This Platform9-developed HTML5 UI presumably contains all the Platform9 specifics. Here you can create and destroy VMs, control them (power on/off, KVM console) and manage VM settings (storage, network, compute).

What I do know is that they've worked out a viable OpenStack controller configuration, and they deploy this configuration (I'm assuming a small number of VMs per customer) onto their own infrastructure. They don't surface any of the back-end components; the only things the customer gets are access to the instance portal and the OpenStack APIs.
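
To make that concrete, here's a minimal sketch of what consuming those APIs could look like from the tenant side, using the Python openstacksdk library. The auth URL and credentials are placeholders for whatever Platform9 hands you, not actual endpoints:

```python
import openstack

# Minimal sketch of tenant-side API access; the auth URL and
# credentials below are placeholders, not real Platform9 endpoints.
conn = openstack.connect(
    auth_url="https://example.platform9.net/keystone/v3",
    project_name="my-project",
    username="tenant-user",
    password="secret",
    user_domain_name="Default",
    project_domain_name="Default",
)

# From here on it's plain OpenStack: list instances through the Nova API.
for server in conn.compute.servers():
    print(server.name, server.status)
```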

As far as back-end operations go, they back up the metadata each night and replicate it to a secondary datacenter. They use these backups for platform upgrades, too: during the maintenance window, they instantiate a new instance based on the newly released version, import the backed-up metadata and do a final sync for data consistency.

There are a couple of networking and security options to toggle, but you’d have to contact support to enable these. I’ll talk more about this in a sec.

Multi-hypervisor support

Platform9 currently supports two hypervisors: KVM and vSphere. For vSphere, you need to set up your own hypervisors, storage and networking (i.e. import a fully functioning environment into the OpenStack instance). For KVM, the possibilities are a little broader. After you provision the physical hosts with a supported Linux distribution, Platform9 can automatically provision the hypervisor, storage, image and networking services on top. This means a physical host can be a hypervisor (using Nova), a storage node (using Cinder) or a network node (using Neutron). For the Neutron network node, it is recommended to assign no other roles to the host: it acts as an orchestration agent for the cluster, creating bridges across all other hosts, and provides network services (like routing).
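
You can see these roles reflected in the standard OpenStack APIs. A hedged sketch with openstacksdk ("platform9" is a hypothetical clouds.yaml entry, and both calls need admin privileges):

```python
import openstack

conn = openstack.connect(cloud="platform9")  # hypothetical cloud name

# Hosts authorized with the hypervisor role show up in Nova...
for hv in conn.compute.hypervisors():
    print("hypervisor:", hv.name)

# ...while the dedicated network node registers its services as
# Neutron agents (L3 agent, OVS agent, and so on).
for agent in conn.network.agents():
    print("agent:", agent.agent_type, "on", agent.host)
```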

On-prem: gateway and Glance

Each datacenter or vCenter instance is coupled to a stateless virtual 'gateway' (a VM for vSphere, a per-host Linux agent for KVM) that attaches your current infrastructure (compute clusters, networks, storage, templates and images) to the control plane.

In addition to the gateway functionality, the virtual appliance also offers the Glance image service for vSphere environments. This is actually the only OpenStack service that runs on-prem, which makes sense, since Glance is in the data path for templates and ISOs. For all services, workloads and data stay on-prem; only the metadata gets synced.
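
Since Glance speaks the regular OpenStack image API, uploading a template could look like the sketch below (again openstacksdk, with placeholder names; the point is that the image bits land on the on-prem appliance, not in the Platform9 cloud):

```python
import openstack

conn = openstack.connect(cloud="platform9")  # hypothetical cloud name

# Upload a template to Glance; because Glance runs on-prem, the image
# bits stay in the datacenter and only metadata reaches the control
# plane. File and image names are examples.
image = conn.create_image(
    name="ubuntu-template",
    filename="ubuntu-template.vmdk",
    disk_format="vmdk",
    container_format="bare",
    wait=True,
)
print(image.id, image.status)
```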

The gateway needs certain privileges to access vSphere resources; the knowledge base article Prerequisites: Platform9 OpenStack for VMware vSphere describes these. They can be set at a fairly granular level: the article talks about the 'datacenter' object in vSphere, but I suspect it works for individual clusters, too.

Loosely coupled metadata

The gateway syncs its metadata bi-directionally. In many other CMPs, changing the environment outside of the portal creates metadata inconsistencies: the CMP doesn't 'see' the outside changes. Platform9 doesn't have this issue, which makes the platform really suitable for 'hybrid' approaches: customers work with the portal, while admins can work outside of it. This increases operational flexibility. Any changes made outside of the portal get synced into the portal periodically, keeping the metadata consistent.

Of course, not all VMs running on the infrastructure have to be part of the Platform9 instance; they can also run outside of its ownership. You can keep VMs out of Platform9 by placing them on a datastore not managed by OpenStack.

A great example of this flexibility is creating and managing templates and other base images outside of the OpenStack view and having them imported automatically. Having worked with various (VMware-based) cloud management platforms, I find being able to work outside of the CMP and have changes synced into it automatically brilliant.
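
From the consumer's point of view, that sync is invisible; a sketch of what you'd expect to see (assuming the same hypothetical openstacksdk setup as above):

```python
import openstack

conn = openstack.connect(cloud="platform9")  # hypothetical cloud name

# Templates created directly in vCenter get synced in periodically and
# appear as regular Glance images; no manual import step is needed.
for image in conn.image.images():
    print(image.name, image.disk_format)
```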

Although it seems like a small detail, it makes Platform9's architecture much more adaptive and easier to use: this sync keeps the CMP consistent with what's really going on at the infrastructure layer.

Storage

There are currently four storage options available with OpenStack – ephemeral, block (Cinder), file (Manila) and object (Swift) storage. Here’s a tutorial on the different options: OpenStack Storage Tutorial: Storage Options and Use Cases.


There are a couple of types of block storage that can be used with Platform9. The most common type in KVM deployments is Linux LVM. This is a broadly supported and fairly easy option, but it has severe scalability and redundancy limitations, so it's likely feasible for smaller deployments only.

A more enterprise-grade option for vSphere is using the storage stack offered by ESXi, backed by an enterprise, software-defined or hyperconverged array. This option mostly bypasses the Cinder services, which means data services and metadata aren't surfaced, preventing the admin from creating volumes, snapshots, etc. from the Platform9 interface. For that reason, Platform9 has Cinder integration with a number of enterprise storage arrays, specifically NetApp and SolidFire. Platform9's Cinder implementation supports volume type metadata to surface storage array services, such as compression and QoS.
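
In standard Cinder terms, surfacing array services works via volume types with extra specs. A hedged sketch of what that could look like; the spec keys follow the NetApp Cinder driver's conventions, but treat the exact keys and values as illustrative:

```python
import openstack

conn = openstack.connect(cloud="platform9")  # hypothetical cloud name

# Create a volume type whose extra specs map to array-side features;
# the exact keys depend on the backend driver and are illustrative here.
vtype = conn.block_storage.create_type(
    name="netapp-gold",
    extra_specs={
        "netapp:qos_policy_group": "gold",
        "netapp_compression": "true",
    },
)

# Tenants can then request a volume of that type from the portal or API.
volume = conn.block_storage.create_volume(
    name="app-data", size=100, volume_type="netapp-gold",
)
print(volume.id, volume.status)
```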

It also integrates with vSphere's Storage Policy Based Management. More on that here: Storage Policy-Based Management (SPBM) Support for Cinder in Platform9 VMware.

Networking & Security

Platform9 supports various network topologies, including overlay/underlay (VXLAN) networks, flat networks and VLAN topologies. The infra admin creates the underlay networks, while the tenant admin creates tenant-wide networks (for access to shared resources like the internet). Tenant users can then create their own application-specific networks. By default, Platform9 supports routing, NAT and firewall functionality. Load balancer support is currently API-only, and other network services (VPN, for instance) are not supported yet.
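
Since this is standard Neutron, a tenant-side application network plus router could be set up like this (openstacksdk sketch; names, the CIDR and the external network ID are placeholders):

```python
import openstack

conn = openstack.connect(cloud="platform9")  # hypothetical cloud name

# Tenant-side: create an application network and a subnet in it.
net = conn.network.create_network(name="app-net")
subnet = conn.network.create_subnet(
    network_id=net.id, name="app-subnet", ip_version=4, cidr="10.0.10.0/24",
)

# Attach it to a router for routing/NAT toward the external network;
# the external network ID is a placeholder for one the admin created.
router = conn.network.create_router(
    name="app-router",
    external_gateway_info={"network_id": "<external-net-id>"},
)
conn.network.add_interface_to_router(router, subnet_id=subnet.id)
```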


Also be aware that there's no functional integration with NSX-v or other third-party network virtualization solutions beyond the constructs Platform9 itself uses. NSX-v support via Neutron is in beta, and I expect some of the simpler NSX-v functionality to be surfaced into the Platform9 portal sometime this year. In the VMware vSphere Support: What Works and What's Upcoming article, Platform9 states:

Neutron and NSX support – You will have the ability to integrate your new or existing software-defined networking stack (NSX, or others) with Platform9, and leverage advance functionality such as:

  • dynamic creation of private networks
  • creation of routers to route traffic across different private networks
  • using tunneling protocols such as STT or GRE for traffic routing
  • integration with advance functionality such as Load Balancer as a Service (LBaaS) and Firewall as a Service

For now, Platform9 uses the Neutron service with OVS (Open vSwitch) functionality. You can read up on how to use Neutron in a Platform9 environment in OpenStack Tutorial: Networking with Neutron – Basic Concepts, which explains the Provider, External and Tenant Network concepts pretty well, or in Tutorial: Setting up OpenStack Neutron for Linux-KVM.

From a security perspective, things look pretty well thought out. They support two-factor authentication and SAML, and portal and API access can be locked down on their side using VPN, dedicated physical connectivity or firewall policies (specific source subnet access).

Multi-region support

Platform9 recently announced multi-region support. This means that multiple customer datacenters can be used within a Platform9 tenant. Tenant users can then select the most suitable datacenter based on distance to the end users, or on dependencies on other application services for data locality.

[Image: Platform9 multi-region support]

The tenant admin can segment based on physical boundaries (a datacenter, a datacenter room or a rack) or logical boundaries (cluster or hypervisor type, network or storage boundaries, vCenter instance). The tenant admin can even assign resource usage policies to these regions in a granular way. Read more on this feature here: Multi-Region Support from Platform9.
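
In OpenStack terms, regions come from the Keystone service catalog, so picking a datacenter is a per-connection choice with the same tenant credentials. A hedged sketch (region, flavor, image and network names are all placeholders):

```python
import openstack

# Region names are placeholders; they come from the Keystone catalog.
conn_eu = openstack.connect(cloud="platform9", region_name="datacenter-eu")
conn_us = openstack.connect(cloud="platform9", region_name="datacenter-us")

# Place an instance near its users by picking the connection.
server = conn_eu.compute.create_server(
    name="web-01",
    flavor_id=conn_eu.compute.find_flavor("m1.small").id,
    image_id=conn_eu.compute.find_image("ubuntu-template").id,
    networks=[{"uuid": conn_eu.network.find_network("app-net").id}],
)
```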

There are more granular options to segment the underlying infrastructure, too: OpenStack Tutorial: Set Up Resource Tiers Using Host Aggregates.
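
Host aggregates are plain Nova functionality, so a resource tier could be built roughly like this (openstacksdk sketch with example host and tier names; requires admin privileges):

```python
import openstack

conn = openstack.connect(cloud="platform9")  # hypothetical cloud name

# Group a set of hypervisors into a "gold" tier; host names are examples.
agg = conn.compute.create_aggregate(
    name="gold-tier", availability_zone="dc1-gold",
)
conn.compute.add_host_to_aggregate(agg, "kvm-host-01")
conn.compute.set_aggregate_metadata(agg, tier="gold")

# With Nova's AggregateInstanceExtraSpecsFilter enabled, a flavor whose
# extra specs include tier=gold will only land on hosts in this aggregate.
```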

Ecosystem Support

One of the great advantages of coupling vSphere with OpenStack is VMware's ecosystem support, with great solutions for backup (Veeam), network virtualization (NSX), monitoring (Log Insight, Operations Manager), automation (Packer and Vagrant integrate with OpenStack) and more. These solutions continue to work unchanged for the infrastructure admins, and because of the two-way sync, you can keep using each solution's native tooling for management. Not all of the third-party functionality is exposed in the cloud management platform, of course, but that's really no different from, say, vRealize Automation or vCloud Director.

Billing

There's no mention of billing (or chargeback) anywhere. However, OpenStack contains the Ceilometer component for metering, which underpins billing and chargeback. My assumption is that Platform9 supports this component via its APIs but doesn't surface it in the Clarity UI. I couldn't find any info on this on their support portal, unfortunately.

Concluding

Although Platform9 does 'OpenStack', they actually mean 'OpenStack for VMware environments'. Significant work has been done on the Neutron and Cinder services to make OpenStack more suitable for VMware shops, although their roots are pretty obvious: they started out supporting KVM, not vSphere. I hope to see more integration between Platform9 and vSphere in the future, and more features being surfaced. I'd also like to see ecosystem products (like VSAN, NSX, Veeam, etc.) make their way into Platform9.

One of the few things I'd like more insight into is what really makes up their managed service; in other words, I want to know what gets deployed on their side for each tenant. I'll see if I can learn more about the SaaS part in the next couple of months.

After a few days of investigating the solution, working with it in my lab and examining their support portal, though, I can't find many holes in their architecture. Bravo!