Continuous Integration, Deployment (or Delivery), and even Release are well-known concepts for developers and cloud engineers. Teams work hard to create pipelines that do most of the repetitive work for them, reducing cognitive load and toil. The pipelines are a second brain of sorts that remembers to do every little check, test, or task at exactly the right moment, automatically integrating, testing, deploying, and observing our software for us. These continuous improvement patterns and workflow automation are key aspects of any high-performing software and cloud engineering environment. While the exact steps differ for every pipeline, they generally fall into one of three buckets: test, build, and deploy; each performs a series of automated steps and tasks to get code to production. But, as the title reveals: what about documentation? Why do we sweep documentation under the rug? Why is there no separate Continuous Documentation bucket?

In this post, I’ll make the case for Continuous Documentation and why I think we need it.

What is Continuous Documentation, anyway?

First things first, let’s define Continuous Documentation.

Continuous Documentation is a behavioral and organizational pattern that closes the loop between changes in a code base and its documentation by using automation (documentation-as-code) and standardized workflows and pipelines.

In other words: it’s the practice of updating documentation in lockstep with updating the codebase; effectively coupling documentation to the codebase.

Because, surprisingly, documentation is hardly ever coupled to its codebase at all. It’s an open loop. Closing the loop creates positive feedback and continuous improvement; leaving it open creates a downward spiral of continuous deterioration.

Documentation and similar artifacts, like onboarding docs, design docs, and architectural overviews, are often uncoupled, unsynced, and kept in completely different repositories and tools (like Confluence, wikis, or (shudder) SharePoint or Teams). No wonder it’s always out-of-date, incomplete, and incorrect.

Note: I purposefully left when the loop closes out of the definition, because I think the length of the loop varies from organization to organization. How long the loop between a change in a codebase and the accompanying change in the documentation is depends on the maturity of the teams working with this pattern: some will enforce strict commit-level consistency, while others may choose a more eventual consistency model, closing the loop every sprint. It also depends on the technical capabilities and limitations of the development environments, the size of the codebase, and the amount of technical debt present.

Why is uncoupled documentation bad?

You may wonder: what’s the problem here? Documenting is a core agile practice, right? Alas, it is often (nay, always) repressed by the constant pressure of delivering business value to stakeholders quickly.

And by not coupling documentation to its respective codebase, open loops are left unclosed. These create future work (i.e. work is kicked down the road, but never really done), documentation debt, blind spots in documentation coverage, and outdated, incomplete, and incorrect documentation.

That means that any business process dependent on documentation suffers. These processes include onboarding new developers and moving developers between projects or teams, but uncoupled documentation can very well also impact your end users and customers if the documentation is customer-facing. This hurts productivity (remember: opportunity cost kills agility), but it also skyrockets the customer or user support load, tanking customer satisfaction.

And uncoupled documentation has other negative effects, too: it plays a hand in creating tribal, undocumented knowledge, which is hard to transfer between people and teams, and it adds noise to the knowledge signal. This eventually leads to an unhealthy engineering culture, cultivating hero complex issues: individuals get stuck in a cycle of being the hero with all the knowledge, able to save the day when issues arise, because they alone have the knowledge (and remember, knowledge is power) to solve outages and other problems. Transferring that knowledge is not in their best interest, so the cycle persists. Read the excellent book The Phoenix Project if you want to know more. For now, we can simply conclude that uncoupled documentation leads to tribal knowledge, and tribal knowledge is bad, hmmkay?

Lastly, it leads to technical debt, too. Without documented architectural diagrams and design specifications, it becomes easier to forget the Boy Scout Rule, which nudges engineers to leave the codebase cleaner than they found it. Without clear documentation, that rule is quickly forgotten, and technical debt and other cruft quietly build up. This hurts security, refactorability, agility, and other qualitative aspects.

Continuous Culture needs tools

Continuous Documentation, like all continuous improvement paradigms (looking at you, DevOps), is more about engineering culture than it is about tools. It’s a mindset where engineers and their managers actively look to close the feedback loop, updating documentation in sync with their work on the actual code-base, and doing so in standardized, automated pipelines where possible.

That, of course, means using tools!

There are many examples of continuous documentation tools that can help in the automated pipeline department. Swimm is a pipeline and IDE tool that auto-syncs documentation where possible. Its algorithm identifies and auto-corrects coupled documentation, reducing drift between the codebase and any kind of documentation when the source code changes. Swimm builds and updates documentation incrementally, keeping it in sync with the codebase, corrects inconsistencies and inaccuracies automatically, and creates visibility into documentation coverage gaps for a given codebase.

Continuous Documentation with Swimm
Swimm couples documentation to code and auto-syncs changes. Source:

It has a couple of runtime modes: IDE-integration and pipeline runs.

Their IDE integration automatically checks which documentation needs to be updated as the developer works on a piece of code, surfacing any required manual changes for the developer to tackle immediately. This is an example of a strict commit-level consistency model, forcing the developer to treat code plus documentation as an integral package. Their pipeline automation takes an approach more akin to how tests are run in CI pipelines, ‘testing’ whether and how documentation should be updated.
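
To make the pipeline idea concrete, here is a minimal, hypothetical sketch of what ‘testing’ documentation in CI could look like. It is not Swimm’s actual implementation; the `src/` and `docs/` paths and the `origin/main` base branch are assumptions for illustration. The check fails the build when code changed but no documentation did, keeping the loop closed at merge time.

```python
# Hypothetical Continuous Documentation check for a CI pipeline.
# Assumption: code lives under src/ and documentation under docs/.
import subprocess
import sys

def needs_doc_update(changed_files):
    """Return True when code changed without any accompanying doc change."""
    code_changed = any(f.startswith("src/") for f in changed_files)
    docs_changed = any(f.startswith("docs/") for f in changed_files)
    return code_changed and not docs_changed

def main():
    # Compare this branch against its base; "origin/main" is an assumption.
    diff = subprocess.run(
        ["git", "diff", "--name-only", "origin/main...HEAD"],
        capture_output=True, text=True, check=True,
    )
    changed = [line for line in diff.stdout.splitlines() if line]
    if needs_doc_update(changed):
        print("Code changed under src/, but docs/ was not touched. Failing build.")
        sys.exit(1)
    print("Documentation check passed.")

# In CI, a job step would simply run main() and act on the exit code.
```

A commit-level consistency model would run this on every push; an eventual consistency model might run it once per sprint against the last release tag instead.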

In any case, developers contribute to documentation coverage and quality as they work on the codebase, preventing documentation debt. Swimm also scans codebases, so teams can assign work packages in sprints to improve coverage, fix blind spots, and reduce tribal knowledge.

Improving the Developer Experience

Now imagine your company builds multiple applications, but they all deliver a different documentation experience to their end users due to inconsistent processes, practices, and pipelines. That leads to a disjointed and inconsistent experience. This is exactly what VMware, as a builder of many software products, is suffering from. Different teams, from different acquisitions or historical backgrounds, deliver very different experiences to the users of their software. In their recent presentation at Cloud Field Day 10, Dave Shanley went into how they’re building a better developer experience through a radical internal consolidation effort, helped by a few tools he developed to improve API discoverability and improve documentation coverage and quality.

They’re building a new public developer portal with API documentation and an emphasis on consistency, completeness, and test coverage of thousands of APIs across hundreds of VMware products. In the video, Dave tells the story of how his tools are enabling development teams for all of these products.

VMware’s Thermometer and Rolodex. Source:

Thermometer is a leaderboard, giving insight into the coverage and quality of every API. Rolodex is an index of every API across the company. The Printing Press automatically generates documentation from the OpenAPI-compatible APIs, and also looks at API quality and coverage, as seen in this screenshot:

VMware’s Printing Press. Source:
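
To illustrate the idea behind a tool like Printing Press, here is a toy sketch that walks an OpenAPI-style spec, renders Markdown reference docs, and scores documentation coverage as the fraction of operations carrying a description. The pet-store spec and all names below are made up for illustration; this is not VMware’s code or a real VMware API.

```python
# Toy docs-from-spec generator: OpenAPI-style dict in, Markdown out,
# plus a simple documentation coverage score.
SPEC = {
    "info": {"title": "Pet Store"},
    "paths": {
        "/pets": {
            "get": {"summary": "List pets", "description": "Returns all pets."},
            "post": {"summary": "Create a pet"},  # no description: a coverage gap
        },
    },
}

def render_markdown(spec):
    """Emit a Markdown reference page for every operation in the spec."""
    lines = [f"# {spec['info']['title']} API"]
    for path, methods in spec["paths"].items():
        for method, op in methods.items():
            lines.append(f"## {method.upper()} {path}")
            lines.append(op.get("description", "_No description yet._"))
    return "\n\n".join(lines)

def coverage(spec):
    """Fraction of operations that carry a description."""
    ops = [op for methods in spec["paths"].values() for op in methods.values()]
    documented = sum(1 for op in ops if "description" in op)
    return documented / len(ops)
```

The coverage number is what feeds a leaderboard like Thermometer: teams can see which APIs lag behind and assign the gaps as sprint work.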

Like Swimm, VMware’s DX tools are a massive improvement on the Continuous Documentation front. They give VMware’s developers insight into API and documentation quality and coverage, so they know where and what to improve. These improvements are then fed into the Developer Portal for public consumption.

Joep’s Stance

I think Continuous Documentation as a way of working is a vital piece missing from our Agile, Scrum, and DevOps methodologies. We need it to create complete, correct, and up-to-date documentation for both internal and public consumption; and tools like Swimm’s or VMware’s are instrumental in reducing developers’ cognitive load and the amount of menial work they must do to keep documentation up-to-date. I hope we’ll see more solid solutions like these hit the market, and see integrations into popular pipeline and IDE environments that make creating and updating documentation trivial and automated. Ask me in a year whether my hope was justified.