Use Version Aware Build Scripts

I am thinking of writing a small booklet about better builds; this blog post might end up as part of it.

While I do love Continuous Integration servers such as TeamCity and Jenkins, I try to use them as much as possible as “dumb triggers” for my build actions. They do one thing and one thing only: trigger the correct build script with the correct resources (source version, artifacts from other builds, etc.), and nothing more.

I don’t add any custom build actions such as compilation or source checking, because I believe the build actions themselves have a very strong connection to the correct version of the source.

For example, in an earlier version of the source you might need to compile a specific set of projects; in later versions, new projects may have been added, and others removed or renamed. If your build scripts are part of source control, versioned along with the source, they will always match the current source version.

So what do those processes trigger? A script that lives inside source control, tied directly to the project’s committed source version. It might be an XML file (ugh), a rake file, or a FinalBuilder script file (my favorite).

Then it becomes easier to revert to an older build version to recreate it, to test differences between versions, or to build multiple branches of the same application in parallel for different clients.

See if you can take your current CI build and run it on a version of the source from six months ago. How many changes would that require to make it work? If your build is tied to the source version, things should be much simpler.

Single Responsibility

To me it is also about the Single Responsibility Principle. A Continuous Integration server should do one thing well: manage builds, their resources, and their scheduling. It should not also need to worry about what is inside each build. In Uncle Bob’s newspaper metaphor, the builds would be like different sub-headings: each does one thing, and the caller of those functions only calls them, without caring what they do.


Thinking about it the other way around: if your whole build script runs inside one large set of TeamCity custom actions, all managed in the browser, and you refactored that script into separate script files that are part of the source code, you’d be closer to what I am describing.


So when I see companies boasting about how many custom tasks their CI server supports, and how easily you can manage it all in the browser, I remind myself: the more I use those custom build features without the context of the source version they are attached to, the more work I will have to do to change them every time I need to move back and forth between versions.

Version Aware Artifact Paths

Following that line of thinking, I would argue that version-aware build scripts in source control should also decide internally which artifacts the encompassing CI process that runs them exposes and shares with other builds. Today such things are managed solely at the CI tool level, but it would be good if the tools allowed artifact paths to be tied to the current version of the source.
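
One way this could look (a hypothetical sketch; the `artifacts.list` file and its location are my invention, not a feature of any CI tool): the repository carries a versioned file listing the artifact paths for this source version, and the CI side simply reads it.

```ruby
# Hypothetical sketch: the repository carries a versioned
# "build/artifacts.list" file naming the build outputs the CI server
# should publish for this source version. The CI-side step just reads
# it and obeys. The file name and layout are assumptions.
def artifact_paths(repo_root)
  list_file = File.join(repo_root, "build", "artifacts.list")
  return [] unless File.exist?(list_file)

  File.readlines(list_file, chomp: true)
      .map(&:strip)
      .reject { |line| line.empty? || line.start_with?("#") }
end
```

Because the list travels with the source, checking out a three-month-old revision automatically yields the artifact paths that made sense three months ago.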

Avoid XML-Facing Build Tools

Having your build scripts in XML is one of the worst things you can do for maintainability and readability.

Treat your build scripts the way you treat your source code (or the way developers should treat their source code): with respect for readability and maintainability, not just performance.

To that end, XML is a bad candidate. If you have more than 15 or 20 lines of XML in a build document:

  • It is hard to tell where things start, and what they depend on
  • It is hard to debug the build
  • It is hard to add or change things in the build logic
  • XML case sensitivity sucks
  • Creating, adding, and using custom actions is a chore

To avoid XML, you can use one of the many build tools out there that are either visual, or support a domain-specific language for builds that is more readable, maintainable, and debuggable than XML:

  • Rake – a Ruby-based DSL (Domain Specific Language) for builds that is very robust and quite readable
  • FinalBuilder and VisualBuild – two visual build scripting tools that give you great visibility into your build script
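
For contrast with XML, here is a minimal sketch of what a Rake-based build chain might look like. The task names are made up, and in a real Rakefile each block would shell out to a compiler or test runner instead of recording its name:

```ruby
require "rake"
include Rake::DSL  # makes the `task` method available outside a Rakefile

RAN = []  # records execution order, just to make the dependency chain visible

task :clean do
  RAN << :clean
end

task compile: :clean do
  RAN << :compile   # a real task would invoke the compiler here
end

task test: :compile do
  RAN << :test      # a real task would run the test suite here
end

Rake::Task[:test].invoke  # runs clean, then compile, then test
```

Compare that to the equivalent XML: the dependency chain reads top to bottom, and debugging is just debugging Ruby.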

Start Products with an Empty Build and Deploy Pipeline

Builds can be nasty beasts when it’s finally time to organize and do them.

There are so many things you did not think about that it can take days or weeks to come up with a build for an existing system. To avoid the nastiness, start with an empty build on the first day of your project. Then grow it gradually.

Before you start your first feature on a product that you know needs to exist for a long time, start with creating the pipeline for deploying it.


  • Create an empty source project. Something that is a stand-in for the real project you are about to develop. If it is a web application, it could be a single page with “hello world” on it. This project will be what you throw through your empty pipeline to test it.
  • Create a simple CI build script that will live with the code. For now it might only compile your empty project. Make sure it is relative to the code location so you can run it anywhere.
  • Create a simple “Deploy to production” build script that lives with your code. For now this will only copy your project or put some file somewhere on a production server.
  • Oh, you should have a production server to deploy to, so you can deploy your product. That is step #1 of the first iteration!
  • Create a new project in your CI system (I use TeamCity, but go ahead and use Jenkins or anything else you desire). In that project, create a CI build configuration that triggers the CI build script, and a Deploy to Prod configuration that triggers the deployment build script you wrote.
  • Connect the CI server to your source code repository, and make it trigger the CI on commit.
  • Make the CI artifacts available to the Deployment configuration.
  • Make the Deployment Configuration trigger automatically on a successful CI build.
  • Run the whole thing and see a “hello world” in production.
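
A minimal sketch of what the two scripts from the list above might look like in Ruby. The file layout (`src/index.html`, an `out` directory, a “production” directory) is assumed, and copying files stands in for whatever your real toolchain does:

```ruby
require "fileutils"

# build/ci.rb -- lives in source control next to the code. For the empty
# project, "compiling" just stages the hello-world page into an output
# directory; paths are relative to the repo root so the script runs
# wherever the code is checked out.
def ci_build(root, out_dir = File.join(root, "out"))
  FileUtils.mkdir_p(out_dir)
  FileUtils.cp(File.join(root, "src", "index.html"), out_dir)
  out_dir
end

# build/deploy.rb -- for now, "production" is just a directory; a real
# script would copy to a server share or call a deployment tool here.
def deploy(out_dir, production_dir)
  FileUtils.mkdir_p(production_dir)
  FileUtils.cp(File.join(out_dir, "index.html"), production_dir)
  File.join(production_dir, "index.html")
end
```

The point is not the copying; it is that both scripts exist from day one, live with the code, and give the CI configurations something real to trigger.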

Now you are ready to nitpick:

  • You might want to create a “deploy to test” configuration and script that is triggered instead of the production one.
  • You might want to do the same with a staging environment

Now you have an empty build and deploy pipeline.

You can start writing real code, and as real code gets created, you only need to make small modifications to your build scripts or CI process variables to keep things working smoothly.

Now the build grows and flourishes alongside the source code, instead of as an afterthought. And it is remarkably easier to handle because the changes are much smaller.

Rolling Builds and the Plane of Confidence

When I check in code I have been working on, I feel torn:

1.    I want to wait for the build result
2.    I don’t want to waste time doing nothing while waiting for the build results
3.    But I fear touching the code until I’m sure I didn’t break anything

Supposedly, this is where a CI build configuration comes into play. CI builds are supposed to be as fast as possible, so we get feedback about what we checked in as quickly as possible and can get back to work. But the tradeoff is that CI builds also do fewer things in order to be fast, leaving you with a sense of some risk even if the build passed.

This is why I like to have “Rolling Builds”. I like to have builds trigger each other in the Continuous Integration Server:

  • A check-in triggers the CI build.
  • A successful CI build triggers a nightly build.
  • A successful nightly build triggers a deploy-to-test build.

I think of the builds now as single waves crashing on my shoreline. Each build is a slightly bigger wave. Each wave crashing on the shore brings with it a layer of confidence in the code. And because they happen serially, I can choose to just relax and watch the waves of increasing confidence crash on the shore, or I can choose to continue coding right after the first wave of confidence.

As I program, the next waves of build results hit the shoreline, and my notification tray tells me what that wave brought with it. Another “green” result wave tells me to go on about my business. A “red” wave tells me to stop and see what happened.

But I always wait for at least the first wave to come ashore and tell me what’s going on. I need that little piece of information that tells me “seems legit so far” so I can feel good enough about going back to coding. If my changes were big, I might wait for the next wave to see what to do.

So my confidence after check-in is not a black-or-white result. It is a continuous plane of increasing confidence in the code I am writing, topped off when the code is deployed to production.



How to make your builds FAST using DRY and SRP Principles


One of the things that kept frustrating me when I was working on various types of build configurations in our CI server, was that each build in a rolling wave would take longer than the one before it.

The CI build was the fastest. The nightly build was slower, because it did all the work of the CI build (compile, run tests) plus all the other work a nightly needed (run slow tests, create an installer, etc.). The deploy-to-test build would maybe just do the work of the CI build and then deploy, or, in other projects, would do the work of the nightly and then deploy, because wouldn’t you want to deploy something only after it had been tested?

And lastly, production deploy builds took the longest, because they had to be “just perfect”, so they ran everything before doing anything.

That sucked because, at the time, I had not discovered one of the most important features my CI tool had: the ability to share artifacts between builds. Artifacts are the outputs of build actions. These could be compiled binary files, configuration files, maybe just a log file, or simply the source files that were used in the build, straight from source control.


Once I realized that build configurations can “publish” artifacts when they finish successfully, and that other build configurations can then “use” those artifacts in their own work, things started falling into place.

I no longer needed to re-do everything in every build. Instead, I can use the DRY (Don’t Repeat Yourself) principle in my build scripts (remember that build scripts are kept in source control, and are simply executed by a CI build configuration that provides them with context, such as environment parameters or the artifacts from a previous build).

Instead, I can make each rolling-wave build do only one small thing (Single Responsibility Principle), added on top of the artifacts shared by the build before it.

For example:

  • The CI build only gets the latest version from source control, compiles in debug mode, and runs the fastest tests. Then it publishes the source and compiled binaries for other builds to use later. Takes the same amount of time as the previously mentioned CI build.
  • The nightly build gets the artifacts from the latest successful CI build, and only: compiles in RELEASE mode, runs the slow tests, creates installers, and publishes the installer, the source, and the binaries for later builds. Notice that it does not even need to get anything from source control, since those files are already part of the artifacts (depending on the size of the project, publishing source artifacts might be too slow to be worth it). Takes half the time of the previously mentioned nightly build.
  • The deploy-to-test build gets the installer from the last successful nightly build and deploys it to a machine somewhere. It does not compile or run any tests. It publishes the installer it used. Takes 30 seconds.
  • The deploy-to-staging build just gets the latest successful installer artifacts from the deploy-to-test builds, deploys them to staging, and also publishes the installer it used. Takes 30 seconds.
  • The deploy-to-production build just gets the latest successful installer artifacts from the deploy-to-staging builds, deploys them to production, and also publishes the installer it used. Takes 30 seconds.
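
The hand-off above can be sketched as a toy model. The `STORE` hash and the `publish`/`consume` helpers stand in for whatever artifact mechanism your CI server provides; the stage names mirror the bullets:

```ruby
# A toy model of artifact hand-off between rolling builds. STORE stands
# in for the CI server's artifact repository; each stage only adds its
# own output on top of what the previous stage published.
STORE = {}

def publish(stage, artifacts)
  STORE[stage] = artifacts
end

def consume(stage)
  STORE.fetch(stage) # fails loudly if the earlier stage never published
end

# CI: compile in debug, run fast tests, publish source and binaries.
publish(:ci, { source: "src/", binaries: "bin/debug" })

# Nightly: reuse the CI artifacts, add release binaries and an installer.
publish(:nightly,
        consume(:ci).merge(release: "bin/release", installer: "setup.msi"))

# Deploy-to-test: only needs the installer the nightly published.
publish(:deploy_test, { installer: consume(:nightly).fetch(:installer) })
```

Each later stage does strictly less work, which is exactly why the “advanced” builds end up being the fastest ones.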

Notice how with artifact reuse, I am able to reverse the time trend with builds. The more “advanced” a build is along the deployment pipeline, the faster it can become.

And because each build in the deploy pipeline only gets artifacts of successful builds, we can be sure that if we got all the way to this stage, all the needed steps were taken to arrive here (we ran all the tests, compiled all the source…).

Pattern: Script Injection

I am slowly realizing that perhaps my concepts for the build book might serve better as reusable patterns for creating builds that solve specific problems.

This is a test to see how well this idea holds up in reality. If you like it, and especially if you do NOT like it, please let me know in the comments why, and how you would change it.

I think this is a topic that can be of great use to many people. If we frame it right, it can be more easily distributed and understood.

I am not even sure whether the sections below are what a pattern of this sort needs.

Pattern: Script Injection


You have a continuous integration process running nicely, but sometimes you need to build older versions of your product. Unfortunately, in older versions of your product in source control, the file structure is different from the one your build actions are set up to use. For example, a set of directories that exists in the latest version in source control, and is used in deployment, does not exist in the version from three months ago. So your build fails, because it expects certain files or directories that did not exist in that source control version.


You want the set of automated actions in your build to match exactly the current version and structure of your product files in source control.


Separate your build script actions into two parts:

  • The CI side script, which lives in the continuous integration system.
  • The source control side script.


Source control side script

One or more script files that live inside the file structure of your product in source control. These scripts change with the product version. It is important that this is the same branch the CI system uses to build the product, so that the CI-side scripts have access to the source-control-side scripts.

Developers should have full access to the source control side scripts.

Source-control-side scripts contain the knowledge about the current structure of the files, and which actions are relevant for the current product version, so they change with every product version. They will usually use relative paths, because they will be executed by the CI-side scripts on a remote build server.

CI Side Scripts/Actions

These actions act as very simple “dumb” agents: they get the latest version of the source-control-side scripts (and, if needed, all other product files), and trigger the build scripts as a command-line action.



By separating the version-aware knowledge into a script inside source control, and triggering it via a CI-side script that contains only parameters and other “context” data needed to invoke the source control scripts correctly, we “inject” the version-aware knowledge of what the build should do into a higher-level CI process trigger that does not care about product file structure, but still gives us the advantages we had before with a CI process.
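
A minimal sketch of the CI-side trigger in Ruby, assuming the convention that the source-control-side script lives at `build/ci.rb` inside the checkout (both the path and the environment-variable hand-off are assumptions, not fixed rules):

```ruby
require "rbconfig"

# CI-side trigger -- the only part that lives in the CI server. It knows
# nothing about the product's file structure; it just runs the versioned
# script inside the checkout and hands it context. The "build/ci.rb"
# location is an assumed convention.
def ci_side_trigger(checkout_dir, env = {})
  script = File.join(checkout_dir, "build", "ci.rb")
  system(env, RbConfig.ruby, script, chdir: checkout_dir) or
    raise "source-control-side build script failed"
end
```

Checking out an old revision automatically swaps in the old `build/ci.rb`, so the trigger never has to know what changed.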


Thoughts? Comments? Does this make sense to you? is it the stupidest thing ever?

Does the fact that I now wear glasses help in any way?

Pattern: Shipping Skeleton


Pattern: Shipping Skeleton

Other names: “Hello World”, Walking Skeleton (based on XP), Tracer Bullet Build.


Remember when you had a working product, but you could not ship it, or shipping and deploying it took a long time, or could not be estimated? By that time, your product was too big and your free time too short to start creating a working automated build and deploy process. As time went on, shipping became more and more of a nightmarish manual task, and automating it became more and more of a problem.

Now you're at the start of a new project, and you want to avoid all that pain in your new project.


You want to avoid the pain of automating the build and deploy cycle later, but you also don’t want to spend too much time working on it now.


Before starting development on the new product, start with a shipping skeleton: an empty solution with nothing but a “hello world” output, that has the basic automated build and deploy cycle working. You should be able to ship this empty hello-world project with the click of a button, in a matter of minutes.

Once the shipping skeleton is in place, you can start filling out the product with features, growing the build scripts in small increments alongside the product.

Basic Shipping Skeleton:

1) A Build script (in source control) for compiling the current source

2) A Continuous Integration server with a “CI” build configuration, triggered by code check-in, that invokes the build script from the previous bullet (see also the Script Injection pattern for more on this)

3) Another build configuration on the CI server for “Deploy”. This can be either deploy-to-test or deploy-to-production. Usually you want at least a “deploy to test” succeeding before having a “deploy to production” build configuration. This “Deploy to test” is invoked automatically when the “CI” configuration passes. Later on, as the build process matures, you can change this, but for now, knowing that the product is deployable as soon as possible, while it changes so much, is important for quick feedback cycles.

4) If there isn’t a “Deploy to production” configuration, create it. This deploys the product to a production machine. If it is a web site, it deploys the site to a web server. If it is an installer, it deploys and runs the installer on a machine that will act as a “production” machine, either mimicking a user machine or an enterprise machine where the product will be installed. For web servers, make them as real as possible, all the way, even by making them public (although possibly password protected). This gives you the chance to turn that web server into real production with the switch of a DNS setting.



By starting with a shipping skeleton, you give yourself several benefits:

  • You can ship at will
  • Adding features to the build and deploy cycle is a continuous action that takes a few minutes each day, if at all.
  • You can receive very quick feedback on features or mockup-features you are building into your product.


Build Pattern: Location Agnostic Script


  • Running the build script on a machine where it was never run before requires extra pre-work to make the machine ready, such as mapping drives or creating a special build script folder.


  • The build script uses hardcoded file and directory paths to find its dependent source files. For example, it searches for a solution file in Z:/SolutionFileName. That means the build script has specific requirements of its environment before running, which causes lots of menial, boring work before you can run it.


  • Have the script use only relative paths to find files.
  • Make the script part of source control, or deploy it into the root of the source control branch being built, so that it has access to all the files it needs.
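
For example, a Ruby build script can anchor every path to its own location, so it runs on any machine and in any checkout directory (`MyApp.sln` is a placeholder name, and the script is assumed to sit in a `build/` folder one level below the repo root):

```ruby
# Location-agnostic path resolution: every path is anchored to where
# this script lives, never to a mapped drive or a fixed folder.
REPO_ROOT = File.expand_path("..", __dir__)

def solution_file
  File.join(REPO_ROOT, "MyApp.sln") # not Z:/MyApp.sln
end
```

Check out the branch anywhere, run the script, and the paths resolve themselves; no drive mapping, no pre-work.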

Release Compilation

Imagine you could only compile once a week.

Once every two weeks. Once a month. Scary?

All that code that you write blindly, not knowing whether it even compiles. All those hidden errors lurking, accumulating in the dark. Then, on that special day of compilation, you get to finally find out if you made it. And of course, you most likely did not. Unfortunately, that special compilation day is followed by an even worse day. Demo day.

You are supposed to sit down with your customer, or product owner, and show them all the progress you have made on your application. Most times, compilation day goes sour, and you are stuck fixing last-minute, last-hour, and last-night compilation issues, only to find out that now the application isn’t working. So there goes demo day.

You secretly start wondering if there is a way to ‘break the rules’. A way you could secretly compile your code, maybe on a daily basis, and see if you left something out. Hell, why not do it every time you check in your code? Or even every time you save it? What if you could _continuously_ compile your code? Your life would be so much easier. Not to mention your customer’s life. You could find those compilation errors quickly, and thus fix them quickly while they are still fresh in memory. You could start focusing on that demo now that the code compiles, and see if the app is actually working!

Aren’t we lucky that we all already live with that kind of continuous compilation? At least in static languages, we can see if our code ‘checks out’ with the compiler almost instantly. We can write a single line, or even a single keyword, and see if it survives the reality of the compiler. If we are working in a dynamically typed language, like Ruby or Python, we write tests, and we run those tests continuously, so that we get feedback that the code works. Well, some of us do, anyway. A lot of us just _say_ we do it, but deep down we know it is lip service.

Sometimes we get a ‘fluke’: we write a function and it compiles and runs on the first try, with no compilation or debugging errors. That kind of magic doesn’t happen very often, though.

Things aren’t perfect. Now that we live in this magical world where compilation is continuous, we start to realize we still have problems with our customer’s demos.

The demos aren’t working. Well, they are working, but not really: they break, or work only partially. And when we try to fix the functionality problems, we break other functionality. We need testing, or, more precisely, _automated_ testing, to tell us whether our code functions right.

Why does it need to be automated, though? Because manual testing takes so much time. It is a real waste of money to have a human repeat the same regression tests. We should use humans for tasks with creativity built in, like exploratory testing, and use ‘automated checks’ for regression testing, to keep telling us, continuously, that we didn’t break anything.

So we write some tests, and we run them locally every time we are about to check in the code. Things feel better for a while, but then we realize the customer demos are still not really working. “Why would the demos not be working _now_?” you ask yourself. Turns out, the demos are supposed to be shown on the customer’s machine. The customer’s machine is very different from a developer’s machine: it has weird firewall requirements, it needs Active Directory permissions to run your application, and the database is shared on a different machine. For you, demo day is also, it turns out, deployment day.

Because you deploy before you demo (usually the day before, if you are lucky), you deploy one day every week. Or maybe the demo is once every two or three weeks, so you deploy just once every two or three weeks. If you’re lucky.

Deployment once a week? Now you spend your week compiling and testing code, but you’re still as blind as a bat. You do not know whether all this code will actually work within the constraints of the customer’s machine. Sure, it works on _your machine_, but we all know that doesn’t mean much. You spend all this time writing and polishing your beautiful code, while having no idea whether it will _fit in_ with the hard reality of living on a customer’s machine.

Demo day comes, and you deploy, and your perfectly working code is shamefully crashing, lagging, stalling, and basically acting like an angry child in the middle of a supermarket: badly. Your demo sucks. And now you can’t get any feedback from the customer. Instead, you have wasted your customer’s time, and some of their hard-earned trust.

This keeps happening.

Some weeks you get things right; some weeks the new pieces of code just don’t fit the configuration. You have to wait a week between demos to find out whether the changes you made to your code or configuration files actually work, because, well, you deploy once a week.

You secretly start wondering what would happen if you broke the rules and got your own little machine that looks and acts just like a customer’s machine. If you had that kind of machine locally, you could try to deploy to it on a daily basis, and find out if your code actually works after deployment. If it didn’t, you could fix it way before demo day. And then you could use the demos to actually get feedback on the functionality of your application. On second thought, what if you went all out and _automated_ the deployment to the fake customer machine? Then you could test your deployment continuously, on every check-in.

Now that would be quite beautiful.

Deployment testing is like a compilation step for your release cycle. It verifies that your product, when deployed, can face the harsh realities of a production machine. It should happen continuously, so that you do not find out at the last minute that your code or configuration cannot handle the current reality.

For deployment testing, you need to create a _staging_ environment. This environment mimics the production environment as closely as possible, and it should not be contaminated with compilers, editors or development tools.

How do you test your deployment with a staging environment? You run acceptance tests, and hopefully also all the other automated tests you’ve written, against the application deployed on staging. For example, you could run browser tests (using Selenium or Capybara) against a web application installed on staging. What happens if you _do_ need to debug something that only seems to happen in the staging environment? That’s why you have a ‘test’ environment. Think of ‘test’ as ‘staging + debugging tools’. It can be ‘dirty’, in that you can use it to examine things more closely, but in an environment that can simulate real-world difficulties.
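
The very smallest deployment test might be a smoke check over HTTP. This sketch assumes a web application; it stands in for (and would run before) a real Selenium or Capybara suite:

```ruby
require "net/http"
require "uri"

# The smallest possible deployment test: after deploying to staging,
# fetch the app's front page and fail the build if it isn't serving.
# A real suite would then drive the app through Selenium or Capybara;
# this is only the smoke check that runs first.
def deployed_ok?(base_url)
  response = Net::HTTP.get_response(URI(base_url))
  response.is_a?(Net::HTTPSuccess)
rescue StandardError
  false # connection refused, DNS failure, etc.: the deployment is broken
end
```

Even this one check catches the most common failure mode: the deploy script “succeeded”, but nothing is actually listening on the other end.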

The environment right before ‘test’ is ‘dev-test’: either a developer’s local machine, or a continuous integration build agent that usually just compiles and runs automated tests.

To me, a beautiful build is one that encompasses all the levels of confidence we just discussed: compilation, automated testing, automated deployment, and deployment testing.

If possible, I take the next logical step: I also deploy to a production machine in an automated manner, using the same code that has passed all the other stages.

I, and many others, call this chain ‘Continuous Delivery’.