Another case for full releases

Last week, I was in the middle of a failed release that needed to be rolled back.  After rolling back the code on the live production server, we found that there were still issues.  What I had not been told about was a release a few weeks earlier in which a developer placed a single module on the server without installing an entire release.

So when last week's deploy blew away that developer's changes, the only way to get them back was from the peer server. In a single-server environment, even that would not have been possible.

Yet again, this shows that partial releases have no place in live server environments.

A branching poem

Wrote this for a branching thought:
The time has come
To branch on some,
And so it will
Be the head is still.
Wanted to send this out to developers just before I branch for a release.

Full or partial releases

Working in the web app world, I frequently get asked, often by managers who aren't thinking about the product or SOX compliance, whether it is OK to plop just a JSP file or a jar onto a server instead of rebuilding the entire war each time.

My answer is always "No!". There are a number of reasons why, but the simplest is that doing so defeats the basic tenets of release engineering: reproducibility, traceability, accountability.

As a simple example that recently occurred in my own job, we were starting to upgrade a pair of servers that had not been touched in nearly three years. We started in on the first of the pair; however, we soon found that the code needed to be reverted. When we redeployed the backup of the war file, the code was exhibiting incorrect behavior, especially compared to its twin.

It turns out that soon after the deployment three years earlier, a developer had also deployed a single JSP file into the exploded war directory. Three years go by: the developer has moved on, the managers have forgotten everything, and two different release engineers have moved on. The developer was playing around with a production system, didn't document what happened and didn't re-release the product to correct the situation. As a result, the product could have been in jeopardy had we not been cautious and worked on only one server at a time (another topic of discussion: a lot of developers want me to deploy to all the servers at once). By working on only one, we could compare the old war with the changed war on its twin.

This anecdote illustrates the major issues with partial deployments. There was no accountability for the product: neither the manager, the developer nor the release engineer at the time of the first release held themselves accountable. There was no traceability: no documentation was kept, no build records, no labeling, just a file plopped down into a directory. There was no reproducibility: the environment could not be reproduced without looking at the twin server. If that twin production server hadn't existed, then there would have been nothing to revert to.
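A simple audit of the exploded war against the release's war archive would have caught the stray JSP long before the revert. Here is a minimal sketch of that idea in Python; the script, its arguments and its output format are hypothetical, nothing like it existed at the time.

    # Minimal sketch: compare an exploded war directory against the released
    # war archive and report drift. The script and its paths are hypothetical.
    import hashlib
    import os
    import sys
    import zipfile

    def file_md5(path):
        digest = hashlib.md5()
        with open(path, "rb") as handle:
            for chunk in iter(lambda: handle.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def audit(war_archive, exploded_dir):
        """Report entries that differ from, or were never part of, the release."""
        released = {}
        with zipfile.ZipFile(war_archive) as war:
            for entry in war.infolist():
                if not entry.is_dir():
                    released[entry.filename] = hashlib.md5(war.read(entry)).hexdigest()

        deployed = {}
        for root, _dirs, files in os.walk(exploded_dir):
            for name in files:
                path = os.path.join(root, name)
                deployed[os.path.relpath(path, exploded_dir)] = file_md5(path)

        for name in sorted(set(released) | set(deployed)):
            if name not in deployed:
                print("MISSING  %s" % name)
            elif name not in released:
                print("ADDED    %s" % name)   # e.g. a JSP plopped into the directory
            elif released[name] != deployed[name]:
                print("CHANGED  %s" % name)

    if __name__ == "__main__":
        audit(sys.argv[1], sys.argv[2])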

Full deployments from a single release distribution build uphold all of these tenets. Anything less breaks release engineering practice.

Significant builds in continuous integration

With agile methodologies, it is very common to have continuous integration builds. In those methodologies, there are two types of builds: successful and unsuccessful. Unsuccessful builds are builds where the code fails to compile, unit tests fail, or regression tests (if part of the build system) fail. Successful builds are ones where all of those steps complete.

Considering how often continuous integration builds are executed, with cascading, downstream builds of other products, even large disks could fill up quickly. Many limit the number of builds preserved or the number of days preserved. Here I suggest a different approach.

There are two types of successful builds. A significant successful build is one where a change occurred to initiate this build or an upstream (assumed to be successful) build. An insignificant successful build is one where no change occurred to initiate this build.

Insignificant builds could be deleted on the next successful build. Significant builds should be kept for some period determined by the SCM.

As an example, consider a system whose builds are triggered both hourly and whenever changes are detected in version control. It is easy to see, especially overnight, that the hourly builds would become insignificant compared to the version-control-triggered builds.
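To make the distinction concrete, here is a rough sketch in Python of how builds could be classified and the insignificant ones pruned; the Build record and its fields are assumptions for illustration, not taken from any particular CI server.

    # Minimal sketch, assuming a hypothetical build record with flags for
    # whether a change initiated the build itself or its upstream build.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Build:
        number: int
        successful: bool
        own_changes: bool        # a change initiated this build
        upstream_changes: bool   # a change initiated the upstream build

    def is_significant(build: Build) -> bool:
        """A successful build is significant if it, or its upstream build,
        was initiated by a change."""
        return build.successful and (build.own_changes or build.upstream_changes)

    def prune(history: List[Build]) -> List[Build]:
        """Drop an insignificant successful build once a later successful
        build exists; keep everything else."""
        kept = []
        for index, build in enumerate(history):
            later_success = any(b.successful for b in history[index + 1:])
            if build.successful and not is_significant(build) and later_success:
                continue
            kept.append(build)
        return kept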

What, then, would be the purpose of keeping insignificant builds? There may be other tasks outside of the build that are triggered by it, for example, regression tests on a remote system or load tests run at night. Each of these could be triggered by a specific build, even though it generates the same build objects (for example, publishing them to the same object repository).

Both significant and insignificant successful builds have their place within the build framework. Identifying each, and more importantly getting the build system to identify and manage each automatically, would be the next stage.

How not to release

I recently worked for a client with J2EE applications who had issues with my suggestions on how release engineering should be handled. So I will leave this as a lesson for those of you who might read this little blog.

First, they would copy the software from one environment to the next instead of installing it from distribution files. This led to a large number of conflicts within the base configurations (usually the multicast addresses and ports, which would be forgotten).

Next, they insisted on having tweaked startup programs that were never committed to a revision control system, that replaced the ones that would have been in revision control, and that the "installation" system needed to work around (i.e., not overwrite). Instead of the operations team working with development to produce one set of startup programs that worked across the board and committing those to revision control, there are multiple sets of install/start/stop programs that may or may not be correctly maintained and most assuredly are not properly tracked and auditable.

Lastly, the operations team insists on configuring their "large number" of J2EE server systems through GUI, JConsole-like applications that manipulate MBeans. This is laborious, time-consuming, error-prone and unwieldy. Most sites that I have worked at generate configuration files and push them out as text to the multiple servers, modified for each specific server and environment. These files can be recreated at any time and, if properly managed through version control, can even be rolled back to any point in time. Managing a large number of servers through a GUI is an exercise in futility.
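For contrast, here is a minimal sketch of the generated-configuration approach in Python; the hosts, multicast settings and property names are invented for illustration. The point is only that the result is plain text that can live in version control and be regenerated at will.

    # Minimal sketch: render per-server configuration as text from a template.
    # Hosts, addresses and property names below are made up for illustration.
    from string import Template

    TEMPLATE = Template(
        "# generated file - do not edit by hand\n"
        "cluster.name=$cluster\n"
        "multicast.address=$mcast_addr\n"
        "multicast.port=$mcast_port\n"
        "server.name=$host\n"
    )

    SERVERS = {
        "app01.example.com": {"cluster": "prod", "mcast_addr": "239.1.1.10", "mcast_port": "45566"},
        "app02.example.com": {"cluster": "prod", "mcast_addr": "239.1.1.10", "mcast_port": "45566"},
        "qa01.example.com":  {"cluster": "qa",   "mcast_addr": "239.1.2.10", "mcast_port": "45567"},
    }

    for host, settings in SERVERS.items():
        config = TEMPLATE.substitute(host=host, **settings)
        # write one file per host; these would then be pushed out to each server
        with open("%s.properties" % host, "w") as out:
            out.write(config)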

Distribution program

I've recently created a distribution program for myself and others to use within the company. The program is, unimaginatively, called "distrib". It reads XML data files from a server and determines which entry to use from the username of the application account (see previous post).

Names of distribution files, server hosts and pathnames are captured in one of the aforementioned XML files. The distribution files are copied to the distribution directory on the remote server ('server:~/vrel#'); files that have already been copied are verified with md5 checksums. The product's install program is also implicitly copied.

After the distribution files have been copied, the program calls the product's install program on each remote server.

There are options to manage the program flow and to access the XML data files.
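Since the program itself is internal, here is only a rough sketch of its flow in Python. The XML element and attribute names, the install program's name, and the use of ssh, scp and md5sum are all stand-ins for illustration, not the real thing, and the option handling is omitted.

    # Rough sketch of the "distrib" flow; every name below is a stand-in.
    import getpass
    import hashlib
    import subprocess
    import xml.etree.ElementTree as ET

    def local_md5(path):
        digest = hashlib.md5()
        with open(path, "rb") as handle:
            for chunk in iter(lambda: handle.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def remote_md5(host, path):
        """MD5 of a file on the remote host, or None if it is not there."""
        result = subprocess.run(["ssh", host, "md5sum", path],
                                capture_output=True, text=True)
        return result.stdout.split()[0] if result.returncode == 0 else None

    def load_entry(xml_path, username):
        """Select the product entry matching the application account name."""
        root = ET.parse(xml_path).getroot()
        for product in root.findall("product"):
            if product.get("account") == username:
                return product
        raise SystemExit("no distrib entry for account %s" % username)

    def distribute(xml_path, release):
        entry = load_entry(xml_path, getpass.getuser())
        dest_dir = "~/v%s" % release
        files = [f.text for f in entry.findall("file")] + ["sbin/install"]
        for host in (h.text for h in entry.findall("host")):
            subprocess.run(["ssh", host, "mkdir", "-p", dest_dir], check=True)
            for path in files:
                target = "%s/%s" % (dest_dir, path.split("/")[-1])
                if remote_md5(host, target) == local_md5(path):
                    continue  # already copied and unchanged
                subprocess.run(["scp", path, "%s:%s" % (host, target)], check=True)
            # finally, call the product's install program on the remote host
            subprocess.run(["ssh", host, "%s/install" % dest_dir, release], check=True)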

The program is simple, quick and efficient.

A lot of past work was laid as the foundation for being able to write a program such as this: individual application accounts, an install template system with more than 90% common code across 20+ products, and more than 25 products working on the same Generalized Release Process.

Application accounts

For each product that is installed and deployed within my company, there is an application account to go with it. The account will have a shared home directory, /home/username, and a per-host application installation directory, /apps/username.

The application installation directory is managed by the product's install program. The only contents should be from the installation and from the application. Users should not "move" files aside, or copy files to that directory. This is the production installation directory, not a work-area.

The home directory is to house builds, distributions, patch files and other "cruft", depending on the host. Release builds are stored under the "releng" directory by release number, for example "~/releng/1.2". QA builds are stored under the "qa" directory in the same fashion, or as the QA engineer chooses. Distributions are stored in the home directory in sub-directories starting with "v" and the release number, e.g. "~/v1.2". The install program, always found in "sbin" in the source repository, is copied each release (since there may be changes for that release) to the distribution directory on the destination server.
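As a worked example of these conventions, here is a small Python sketch that lays out the paths for a hypothetical account "foo" and release "1.2"; only the layout itself comes from the conventions above.

    # Illustrative only: the account name "foo" and release "1.2" are made up.
    def account_paths(username, release):
        return {
            "home":          "/home/%s" % username,                      # shared within an environment
            "install":       "/apps/%s" % username,                      # per-host, managed by the install program
            "release build": "/home/%s/releng/%s" % (username, release),
            "qa build":      "/home/%s/qa/%s" % (username, release),
            "distribution":  "/home/%s/v%s" % (username, release),
        }

    for name, path in account_paths("foo", "1.2").items():
        print("%-13s %s" % (name, path))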

Generally, builds are performed in the engineering environment and no build tools are available in the production environment. Because of co-location, the engineering and QA environments share home directories, while the production environment is isolated. Many of the products are installed and deployed on multiple hosts within each environment. Therefore, it is important to have a shared resource (the home directory) in each environment from which to install distributions or patches.

Release numbers are just symbolic names; they could be anything. Conventionally they are dotted numeral sets that match tags in the source repository (e.g. "1.4.6.1"), but a release number could be "1.2-test0012" for a QA release, which may correspond to the "1.2" source tag.