At DockerCon this week, I sat in on a Tech Field Day Extra (TFDx) event and we talked about how to move existing applications into the Docker ecosystem.
As a CTO for a company that does a lot of legacy enterprise stuff and End-User Support, I deal with legacy applications on a daily basis. To put this into context: these are, in 99% of the cases, commodity applications provided by a commercial ISV in an installer or virtual appliance format. It’s rarely something developed in-house.
This is still the reality for so, so many corporations out there, and the difficulty of migrating those applications to containers is the major inhibitor for moving from a VM-based IaaS platform to a container-based CaaS platform.
In the past four years, Docker has been aiming at the developer crowd: making the software development cycle easier and more consistent, and removing hurdles for developers moving code into staging and production. This meant Docker was aimed squarely at software development, where refactoring an application from a monolithic architecture to a distributed, microservices architecture is possible. It did not focus on the more traditional ISV approach of delivering software into the datacenter.
The biggest hurdle for organisations with a more traditional set of applications is the software supply chain. In this blog post, I want to dive into what Docker is doing here, how it’s doing that differently than, say, VMware, and what problems it solves.
What’s the problem?
I won’t dive into the software development process and its associated methods and tools (like source version control with Git, CI/CD with Jenkins, deployment using Terraform and config management using Chef). I’m going to ignore developers altogether. Bold statement, right, for a blog post about Docker? I know.
Instead, I want to look at the software supply chain cycle that enterprises face daily: how to know about, obtain, test and deploy a new version of a software product a vendor has released.
Traditional software release pipeline
If you look at a traditional release pipeline for any of those applications, the problem becomes apparent:
- The ISV does its magic trick, building, testing and shipping a new release. The process is not transparent: the vendor is a black box in this scope. You, as the client, just see the end result.
- The client is notified (usually, an e-mail via the ISV support portal). The client starts investigating the new release in terms of functionality, security and quality. This is called a ‘change’ in ITSM/ITIL shops.
- If the new release is relevant, a change request is pushed up to the Change Advisory Board, which assesses the upgrade in terms of business (economic) risk, compliance, critical path and other bottlenecks.
- If accepted, the change gets scheduled in a maintenance window and deployed into production after moving through the internal release pipeline.
Lack of integration
The major issue here is the lack of integration in the software supply chain: the ISV releasing a new version isn’t integrated, technically or organisationally, with the customers using the software. Each client needs to build some kind of release pipeline, and the process varies per ISV, per software product and per business line where it’s deployed. This is a costly endeavour that introduces all kinds of technical debt and is full of technical and organisational risk.
Now, with the rise of virtualization, we saw some attempted fixes to this problem, such as the VMware Virtual Appliance Marketplace (Solutions Exchange), the OVA/OVF file format and virtual appliance builders like TurnKey. But these are all workarounds that don’t fix the issue at hand: standardising the release pipeline and building software to handle it automatically.
Docker removes friction in the software supply chain
Docker is actively working to fix this problem with technology that approaches it from a couple of different perspectives:
1. Docker Registry
With a Docker Registry (whether a hosted registry like Docker Hub or Quay.io, or a private registry), software suppliers now have a simple, standardised, commonly available and commonly used method to deliver software to their client base, from development, testing and acceptance all the way into production: ISVs publish to a registry, customers deploy from a registry.
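What makes registry delivery uniform and reliable is content-addressing: every blob in a registry is named by the SHA-256 digest of its bytes, so the client automatically verifies it received exactly what the publisher pushed. A minimal sketch of that idea (not the registry protocol itself):

```python
import hashlib

def blob_digest(data: bytes) -> str:
    """Content-addressed name for a blob, the way Docker registries
    name layers and manifests: the reference *is* the hash of the content."""
    return "sha256:" + hashlib.sha256(data).hexdigest()

# Publisher side: the ISV pushes a layer; its digest becomes its name.
layer = b"pretend this is a compressed image layer"
published_ref = blob_digest(layer)

# Client side: after pulling, re-hashing must yield the same reference,
# so a corrupted or truncated download is detected automatically.
downloaded = layer  # in reality: bytes fetched from the registry
assert blob_digest(downloaded) == published_ref
```

Because the name and the content are cryptographically bound, every registry implementation behaves the same way — which is precisely the standardisation the OVA-download world never had.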
This is fundamentally different from deploying a new software version into a virtual machine, where the methods differ wildly (download a new OVA, have a software update mechanism inside the app, use an application binary and upgrade script, etc.) and the reliability of those methods is, well, you know. It requires organisations to invest in change management and release control. In other words: the release cycle (notification, download) and the update cycle (updating your application) aren’t integrated.
With Docker, the upgrade process is taken out of each individual application and put into the supply chain (i.e. plumbing) layer shared by every app, and the release and update processes are integrated within a single pipeline. This means the organisation now has insight into new releases, has control over the supply chain, and is able to deploy a new release from dev/test into production from a single view.
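The "know about a new release" half of that pipeline falls out of the registry for free: a registry exposes its tag list (the `/v2/<name>/tags/list` endpoint in the Registry HTTP API V2), so detecting a new vendor release is just a set difference against the tags you've already assessed. A sketch, with the tag lists inlined rather than fetched:

```python
def new_releases(known_tags, registry_tags):
    """Tags present in the registry but not yet seen locally."""
    return sorted(set(registry_tags) - set(known_tags))

# The ops team tracks which releases it has already assessed...
known = ["2.1", "2.2"]
# ...and periodically polls the registry's tag list.
upstream = ["2.1", "2.2", "2.3"]

assert new_releases(known, upstream) == ["2.3"]
```

The same mechanism works identically for every ISV that publishes to a registry — no per-vendor support-portal e-mails to parse.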
To put this into context: compare the mess of updating Windows apps by hand to using the Chocolatey package manager, or an app outside of the Mac App Store to one living inside of it. Having a single supply chain for software makes it so much easier to be notified of, download and install an app update. This is what Docker is doing with its Registry and Store.
With a single supply chain, there is now a well-defined technology, and an accompanying process, that both the supplier and the client can use to deliver and deploy new software releases. It does nothing yet for standardising application (component) design, though. That is where Project Moby comes in, and I’ll share my first impressions in an upcoming blog post.
2. Docker Store
In addition to public or private registries, Docker has created the Docker Store, a marketplace where commercial software can be bought as Docker containers. The Store really is a credit-card front-end to a specific Docker Registry that handles payment and day-1 deployment. While I don’t think the Store will be hugely successful, given how organisations will continue to procure applications, I do think this was a missing cornerstone of VMware’s strategy back in the day, and it’s a fundamental element in allowing Docker to enable an end-to-end software supply chain.
I especially like the coupling of the Store for day 1 deployment and the Registries for day 2 and beyond. Talk about integration of the software supply chain!
3. Content Trust
Securing the software supply chain is becoming a major factor in the speed at which enterprises are able to deploy new applications or application versions, and Docker has a great take on how to solve this issue.
With the Internet as the preferred medium for delivering new application releases, trust becomes an issue. Integrity and publisher verification are important to actually be able to trust the new release being pushed into staging and production. Content Trust (or, more specifically, the Notary project) in a Docker registry gives you the ability to verify both the integrity and the publisher of all the data received from a registry. In the world of Virtual Machines and Enterprise IT (which by and large still runs vSphere), there just is no comparable mechanism: there is manual MD5/SHA checksumming, but nothing integrated or automated.
Compared to the old infrastructure days, there’s just no contest: VMware’s solution was clunkier, had more variation and was inherently less secure.
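Conceptually, Content Trust adds a second check on top of the digest: the digest itself must carry a valid signature from a publisher you trust. A dependency-free sketch of those two checks — note that Notary actually uses public-key signatures per the TUF framework, while the shared-secret HMAC here is just a stand-in to keep the example self-contained:

```python
import hashlib, hmac

# Stand-in for the publisher's signing key (Notary/TUF would use an
# asymmetric key pair; a shared secret keeps this sketch stdlib-only).
SIGNING_KEY = b"publisher-secret"

def sign(digest: str) -> str:
    return hmac.new(SIGNING_KEY, digest.encode(), hashlib.sha256).hexdigest()

def verify(blob: bytes, digest: str, signature: str) -> bool:
    # 1. Integrity: the bytes hash to the advertised digest.
    if "sha256:" + hashlib.sha256(blob).hexdigest() != digest:
        return False
    # 2. Publisher: the digest was signed by the key we trust.
    return hmac.compare_digest(sign(digest), signature)

blob = b"image manifest"
digest = "sha256:" + hashlib.sha256(blob).hexdigest()
signature = sign(digest)

assert verify(blob, digest, signature)
assert not verify(b"tampered", digest, signature)
```

That second check is exactly what manual MD5/SHA checksumming never gave you: a checksum proves the download wasn’t corrupted, but says nothing about who published it.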
4. Docker Security Scanning
Docker takes the security of the supply chain one step further, though, by securing not just the chain but also what flows through it. In Docker Datacenter, you can scan any container image for known security vulnerabilities against the CVE database with Docker Security Scanning.
This is a critical part of running code in production. As a recovering VMware architect, I remember the days of not knowing how secure my virtual appliances were. Even worse: knowing they weren’t secure, because the packaging of the application in a vendor-controlled virtual appliance didn’t allow me to update the insecure component. Being able to break up an application stack into different layers, like Docker does, solves the packaging issue. Having Security Scanning proactively check for vulnerabilities makes them simple, accessible and visible to Ops teams. What a difference compared to the VMware world!
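At its core, this kind of scanning is an inventory match: enumerate the packages baked into an image’s layers and look each version up in a CVE feed. A toy sketch with a hypothetical two-entry feed (the CVE numbers are real — Heartbleed and Shellshock — but the lookup structure is invented for illustration):

```python
# Hypothetical CVE feed: package -> vulnerable version -> CVE identifiers.
CVE_DB = {
    "openssl": {"1.0.1f": ["CVE-2014-0160"]},  # Heartbleed
    "bash":    {"4.3":    ["CVE-2014-6271"]},  # Shellshock
}

def scan(installed):
    """Report known CVEs for the packages found in an image's layers."""
    findings = []
    for pkg, version in installed.items():
        for cve in CVE_DB.get(pkg, {}).get(version, []):
            findings.append((pkg, version, cve))
    return findings

image_packages = {"openssl": "1.0.1f", "nginx": "1.13.0"}
assert scan(image_packages) == [("openssl", "1.0.1f", "CVE-2014-0160")]
```

Because images are layered and content-addressed, a scanner only has to inventory each layer once and can re-flag every image sharing that layer the moment a new CVE lands — something a monolithic appliance disk image can’t offer.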
With Docker focusing on creating a single software supply chain, they’re removing major friction in the enterprise today. With registries, the Docker Store, Content Trust and Security Scanning, Docker is working towards a unified and secure software supply chain.
This is a radically new approach to deploying software into the enterprise, and a major advantage for System Operations teams. Wait, Docker will help you run your infrastructure more smoothly? Yes! Awesome!
Ending on a small side note: there’s one other issue hindering adoption, and that is migrating existing (legacy) applications. I’ll dive into this issue and Docker’s proposed solution, Modernize Traditional Applications (MTA), in my next post, as it’s quintessential for major adoption in the enterprise.