We need to start somewhere, though, and there is perhaps no more important topic in automation than standardization.
In the Amazee Labs Global Maintenance (ALGM) team we provide long term support for Drupal sites. This includes things like monitoring, updating modules and core, applying security patches, etc. (https://www.amazeelabs.com/en/journal/web-maintenance-service-our-clients). Each ALGM team member will typically work on two or three sites during their day, and over the course of a month we may do work across several dozen different projects.
Modules and core - standardizing structure and versions
Let’s first consider updating and patching, the real bread and butter of maintenance. Here, effectively standardizing everything that is invariant between sites is crucial.
One of the first major items of work that we do when a project comes into ALGM is to spend time getting the overall structure of the site looking like every other project we have in ALGM. We move themes and modules into standard directories, we make sure that the configuration details are stored where we expect them to be, and so on.
This does a couple of things - firstly, it reduces the cognitive overhead of working with the project. This is a major theme, and I’ll discuss it in more detail later, but if you just know where everything will be in any of the 30 projects you’re going to be working with during the course of a month, you’re going to be saving yourself a lot of time.
Secondly, if we write a script for one of our sites (to, say, pull metrics about the files on disk), we know it will automatically work on any of the others, since their structure is normalized.
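To make that concrete, here is a minimal sketch of the kind of script a normalized layout enables. The directory paths (`web/modules/custom`, `web/themes/custom`) and the metric itself are illustrative assumptions, not ALGM’s actual tooling, but the point stands: because every project looks the same on disk, one script serves them all.

```shell
#!/usr/bin/env bash
# Illustrative metrics script. Assumes the (hypothetical) standardized
# layout web/modules/custom and web/themes/custom in every project.
set -euo pipefail

project="${1:-.}"

count_entries() {
  # Number of top-level entries in a directory; 0 if it doesn't exist.
  local dir="$1"
  if [ -d "$dir" ]; then
    find "$dir" -mindepth 1 -maxdepth 1 | wc -l
  else
    echo 0
  fi
}

echo "custom modules: $(count_entries "$project/web/modules/custom")"
echo "custom themes:  $(count_entries "$project/web/themes/custom")"
```

Run it against any project checkout (`./metrics.sh /path/to/project`) and it reports the same numbers in the same format, which is exactly what makes aggregating across dozens of sites trivial.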
The other big task in onboarding is assessing the state of the project: what kinds of modules have been installed? Are there any alpha modules? Are there any patches being applied to contrib modules or to core?
Bleeding edge modules and ad-hoc patches might be okay if you’re looking after one or two sites, but when you scale to any reasonable number, they can spell disaster.
Consider alpha modules, for example - they may provide precisely the functionality you’re after, but there are no guarantees that the module is going to provide a working upgrade path between versions. When it comes to upgrading the system, say, to patch a security vulnerability, you may not be able to automatically upgrade because of the lack of upgrade path.
This is, of course, partly your own fault, since relying on experimental, early-stage modules is bad practice in the first place.
Considered from a maintenance perspective, choosing modules with no guaranteed upgrade path is a problem because each of them may require human intervention to upgrade. And when you’re patching 80 sites to deal with, for example, a major security vulnerability, human intervention translates directly into how long those sites stay vulnerable until they’re patched.
A similar argument applies to patches and forked modules, though those cases aren’t as clear cut.
The take-away, though, is this: try every other solution before resorting to patched, forked, or unstable modules. It’s not just about getting the project over the line, it’s about making sure that it can thrive after it has been launched.
In ALGM we work on sites developed at all different times over the last decade. In that time we’ve seen massive changes to the way sites are built, everything from how dependencies are managed (the whole site checked into Git, Git submodules, Composer) to how the frontend is built. Again, if you’re only working on a handful of sites, you may get away with a README to remind you how to get the site working locally, or how to compile your Sass. But for any reasonably large set of sites, this kind of ad hoc work becomes unmanageable.
Part of our standardization process addresses exactly this. Where there is variation in the build process, we try to abstract those differences away.
Say you have two sites: one whose frontend is built with Sass and Babel, the other with LESS and UglifyJS. From the perspective of a maintenance developer updating the site’s modules, for instance, the difference between these two frontend builds is irrelevant. And yet, if she wants to test the site locally, she needs to build the frontend out to CSS and JS.
We abstract these differences behind a single command, identical across all sites, that builds the frontend. That way, the developer knows that to build any of our sites, she just has to run something like “ahoy build-fe” and the front-end assets will be built.
Beyond the convenience of this, there’s another upshot. The code wrapped inside these commands is really the only documentation you need about your site’s front-end build. It’s often said of comments in code that they are lies waiting to happen. For the most part, this is also true of any documentation you write about your build process. The code that actually performs the build can be trusted not to lie: it can never be out of date, and you know it works because, well, it actually does the job it says it does.
There are a few ways of doing this kind of wrapping. Bash scripts work, as does the trusty Makefile. My own preference is to use Ahoy (https://github.com/ahoy-cli/ahoy).
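As a sketch of what such a wrapper might look like with Ahoy, here is a minimal `.ahoy.yml`. The command body is a placeholder (one project might run a Sass compile here, another a LESS one), and this is an illustration of the approach rather than ALGM’s actual configuration:

```yaml
# .ahoy.yml -- illustrative sketch, not a real project config.
ahoyapi: v2
commands:
  build-fe:
    usage: Build the front-end assets for this project.
    # The body varies per project (Sass here, LESS elsewhere),
    # but the command name is identical across all sites.
    cmd: |
      npm ci
      npm run build
```

With a file like this in every project root, `ahoy build-fe` means the same thing everywhere, even though what happens underneath differs from site to site.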
Let what’s unique shine bright
I remember reading a story about one of the modern Zen masters (for the life of me, I can’t recall who it was, possibly Seung Sahn) that always struck me as pretty insightful. A student came to the master to ask why, when they meditated, they all wore uniforms. Surely, the student asked, when we all wear these kinds of drab sitting robes, we lose our individuality? The master said that, in fact, the opposite is true - by standardizing all the irrelevant details, it is what is truly individual in the student that is allowed to shine through.
The same is true for maintenance. By normalizing, standardizing, flattening out all the irrelevant and invariant details of our sites, we can better see and attend to the differences that really make a difference to our sites.