Chris Hanaoka (Tech Director)
Ian Childress (Tech Lead)
Polyverse was a cybersecurity startup that approached security in a novel way. A large share of serious vulnerabilities in Linux software are buffer overflows, which allow attackers to inject their own code into running programs and take control of the targeted computer.
This does require the attacker to know what code is running at a given location in memory, or the target will simply crash. Approaches like Address Space Layout Randomization (ASLR), which randomize the locations of modules in memory, help to an extent. Polyverse's polymorphing went a step further, however, by scrambling the entire module, making it almost impossible for an attacker to know what was running on any given machine.
When I was hired, my first task was to create a series of runbooks, or guides for completing common tasks, for each Linux distro supported by Polyverse. The issue was that a number of people had written documentation for various tasks, but it wasn't organized in any way: it was distributed as a scattering of PDFs, wasn't written in a consistent voice, and nobody knew whether or when any of it had been tested. Additionally, management wanted the documentation to cover a few new distros, so somebody would need to figure out how to accomplish the tasks on those distros and document the steps.
The first step was to figure out just what the product was, how it worked, and what its use cases were.
I interviewed Chris Hanaoka, the tech director, and asked for a rundown on how the product worked. Most Linux software is distributed through a package manager, which resolves dependencies and handles installations. Package managers pull from a series of repositories, which are just big collections of packages. Polymorphing simply inserts itself at the top of the list of repositories, so the machine pulls from Polyverse's repositories first.
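Conceptually, the package manager consults repositories in priority order and resolves each package from the first repository that carries it. A minimal sketch of that idea in TypeScript (an illustrative model only, not Polyverse's actual implementation; the repo names, priorities, and package lists are invented):

```typescript
// Illustrative model of repository priority: the package manager resolves
// a package from the highest-priority repository that carries it.
// Repo names, priorities, and package lists are invented for this sketch.
interface Repo {
  name: string;
  priority: number; // lower number = consulted first
  packages: Set<string>;
}

function resolve(pkg: string, repos: Repo[]): string | undefined {
  const ordered = [...repos].sort((a, b) => a.priority - b.priority);
  return ordered.find((r) => r.packages.has(pkg))?.name;
}

const repos: Repo[] = [
  { name: "polyverse", priority: 1, packages: new Set(["openssl", "bash"]) },
  { name: "distro-main", priority: 99, packages: new Set(["openssl", "bash", "vim"]) },
];

// Packages Polyverse scrambles come from its repo; everything else
// falls through to the distro's own repositories.
console.log(resolve("openssl", repos)); // "polyverse"
console.log(resolve("vim", repos));     // "distro-main"
```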
Next, I had to figure out the target audience, and what they already knew.
I talked to the sales team to see what they could tell me. It turned out customers would have a working knowledge of Linux. Their main concerns would be mirroring Polyverse's repositories (making their own copies of them) and setting those mirrors up in airgapped environments (environments with no network connections) to avoid attacks over the network.
Lastly, I had to take stock of what had already been written: I hunted down the existing documentation and asked coworkers who had written what, to figure out what I could reuse and what would need adaptation.
The first thing I usually do when setting fingers to keyboard is to write out an outline of the document, so I can organize my thoughts. I used this as an opportunity to list out the user tasks that needed to be documented, and their status. Each task was either undocumented, documented, edited, or tested, with tested being the final step. Each task would have a status for each distro, giving me a matrix of user tasks and a plan for completing each one.
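The matrix can be modeled as tasks crossed with distros, each cell holding a pipeline status. A quick TypeScript sketch (the distro and task names here are invented placeholders):

```typescript
// Illustrative model of the task/status matrix; distros and tasks invented.
type Status = "undocumented" | "documented" | "edited" | "tested";

// Pipeline order: each task advances left to right, ending at "tested".
const PIPELINE: Status[] = ["undocumented", "documented", "edited", "tested"];

type Matrix = Record<string, Record<string, Status>>; // task -> distro -> status

const matrix: Matrix = {
  "mirror-repos": { centos: "tested", ubuntu: "edited" },
  "install-agent": { centos: "documented", ubuntu: "undocumented" },
};

// Count the tasks still short of "tested" for a given distro.
function remaining(matrix: Matrix, distro: string): number {
  return Object.values(matrix).filter((row) => row[distro] !== "tested").length;
}

console.log(remaining(matrix, "ubuntu")); // 2
console.log(remaining(matrix, "centos")); // 1
```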
Next, for every undocumented task, I had to figure out how to perform it on each supported distro, in Docker containers, in VMs, and on bare-metal installations.
In order to understand any documentation, you have to understand the base concepts it relies on. One of my tasks was to write new introduction and prerequisite sections, to make sure all readers started from the same baseline. These included walkthroughs of how to find user IDs on the company website, along with lists of required software.
Lastly, I had to make copies for each supported distro. Most of the text was consistent between distros, but the actual commands were not, so I maintained a primary version of the documentation, made copies of it, and inserted the appropriate commands into each version. The primary copy was always the source of truth: any edits happened there, with changes cascading down to the distro-specific versions afterwards.
Originally, all documentation was distributed in PDF format. This is not the best for SEO or usability. After completing all the runbooks, I pushed to get them all moved onto the company website to enhance SEO, usability and editability. My background as a web developer meant that I could do the development work myself, meaning it wouldn't take time away from the regular web developer.
The website was built with NextJS, which is based on React, meaning I could abstract repeated blocks of text into their own components. You'd provide code to a component, and it would format it and put the code block in the right place. I wrote the code blocks themselves as their own components, each with a copy button that copied its contents to the user's clipboard, as is standard now.
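The actual components were React/JSX; here is a simplified, framework-free sketch of the underlying idea, a single function that wraps any command in the shared code-block markup (the class names and markup are invented for illustration):

```typescript
// Framework-free sketch of the shared code-block component: given a command,
// emit the repeated wrapper markup (with a copy button) from one place.
// Class names and markup are invented for illustration.
function codeBlock(command: string): string {
  const escaped = command
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;");
  return [
    '<div class="code-block">',
    `  <pre><code>${escaped}</code></pre>`,
    '  <button class="copy-button">Copy</button>',
    "</div>",
  ].join("\n");
}

console.log(codeBlock("sudo apt-get update"));
```

Because every page calls the same function, the wrapper markup (and the copy button) only ever has to be written and fixed once.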
Repeated code, such as the installation commands, was stored in constants. That way, if the engineering team needed to change a command, the documentation could be updated in one place. This happened several times, and keeping a single source of truth made everyone's lives much easier.
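The constants might look something like this sketch (the distro keys and commands are invented placeholders, not Polyverse's real commands):

```typescript
// Single source of truth for repeated commands, keyed by distro.
// Distro keys and commands here are invented placeholders.
const INSTALL_COMMANDS: Record<string, string> = {
  ubuntu: "sudo apt-get install example-package",
  centos: "sudo yum install example-package",
};

// Every page pulls from the same constant, so an engineering change
// to a command is made once and cascades to all distro pages.
function installStep(distro: string): string {
  const cmd = INSTALL_COMMANDS[distro];
  if (!cmd) throw new Error(`No install command for ${distro}`);
  return `To install, run: ${cmd}`;
}

console.log(installStep("ubuntu"));
```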
I also wrote sub navigation for the documentation. The pages for each distro had subsections, and users would likely want to move between them easily. In the sub nav, the section you were currently reading was automatically highlighted, so you always knew where in the document you were.
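The highlighting logic reduces to a small pure function: given the scroll offset and each section's starting offset, pick the last section the reader has scrolled past. A sketch (the section ids and offsets are invented):

```typescript
// Simplified sketch of sub-nav highlighting: given the page's scroll
// offset and each section's starting offset, return the section the
// reader is currently in. Section ids and offsets are invented.
interface Section {
  id: string;
  top: number; // pixel offset of the section's start
}

function activeSection(scrollY: number, sections: Section[]): string {
  const ordered = [...sections].sort((a, b) => a.top - b.top);
  let current = ordered[0].id;
  for (const s of ordered) {
    if (scrollY >= s.top) current = s.id;
  }
  return current;
}

const sections: Section[] = [
  { id: "prerequisites", top: 0 },
  { id: "installation", top: 600 },
  { id: "mirroring", top: 1400 },
];

console.log(activeSection(750, sections)); // "installation"
```

In the real site this would be wired to a scroll listener (or an IntersectionObserver) that toggles a highlight class on the matching nav link.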
Lastly, I made sure everything worked responsively, meaning it had to be readable and usable on phones, desktops, or whatever device the reader happened to be using. On small screens, this meant moving the sub navigation and converting it to a dropdown.
Ongoing maintenance became much faster. Edits and changes to the public documentation could be done in a few hours, and since customers weren't downloading PDFs, they could be assured they'd always have the latest version. The documentation became one of the most used sections of the site.
If you'd like to see the documentation, email me and I'll be happy to provide a copy.