Continuous Integration and Delivery

An important part of the software process is the final step: delivery.

If you can't get your software into your customer's hands, then what's the point of writing it at all?


Continuous integration and delivery serve several goals, which sometimes cut across one another. In descending order of importance, they are:

  • Improve reliability and quality of releases
  • Improve efficiency of the release process
  • Improve testing feedback loop
  • Improve efficiency of the development process


There are several aspects to continuous integration and delivery:

  • Build: create testable artifacts
  • Test: execute automated tests on a clean machine, separate from any developer's environment
  • Package: create deployable artifacts (may be same as build output)
  • Deploy: deploy artifacts to target environments (e.g. Dev, Staging, UAT or Production)
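The four stages above can be sketched as a single script. This is a minimal illustration, not a real pipeline; the tool names in the comments (dotnet, NUnit) are assumptions standing in for whatever your stack actually uses.

```shell
#!/bin/sh
# Minimal sketch of the four CI stages. Each function stands in for the
# real tooling; the commands named in the comments are assumptions.
set -e  # abort the pipeline as soon as any stage fails

build()     { echo "build: compiling testable artifacts"; }    # e.g. dotnet build
run_tests() { echo "test: running tests on a clean agent"; }   # e.g. an NUnit runner
package()   { echo "package: creating deployable artifacts"; } # e.g. zip / nupkg
deploy()    { echo "deploy: pushing to Dev/Staging/UAT"; }     # e.g. copy to target env

build
run_tests
package
deploy
```

The point of `set -e` is that a failure in any stage stops the pipeline immediately, so a broken build never gets packaged or deployed.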


As expected, working in an organized manner with increased automation has clear benefits.

  • A reliable record of which software version contains which changes
  • Automated and centralized archiving of versions
  • Improved code & configuration quality as all code is tested on a non-developer machine
  • Practice makes perfect: if you're delivering during the entire life of the project, then the final delivery is much more predictable and far less stressful


There are obviously limitations as well. The most immediate one is infrastructure investment: you have to set up build servers yourself or rent capacity in the cloud. You also have to make your process work with automated builds and possibly retrain personnel to work with it.

You have to plan your project, and all stakeholders have to be patient. You have to train everyone on the team never to consider releasing a version of the software from a developer PC.

Setup and maintenance of build agents takes time and effort, especially over longer periods. Operating systems are upgraded, core components change, build systems are upgraded. All of these things can cause the build to fail on a given agent, even though nothing is actually wrong with the product. Here again, though, the agent acts as a canary in the coal mine for your development team. More often than not, a build-server failure alerts the team to a feature that isn't ready yet, before it costs them time to integrate.

Deployment types

The type of deployment depends on the product.

For desktop software, you need to build an installer or a compressed archive that users can download and install. Mobile or UWP applications must be built and then delivered to app stores for installation. Web servers and sites can be deployed directly to in-house servers or into the cloud (e.g. AWS or Azure).

These deployment types are for end users, but there are many more releases than that. Developers need to test their changes locally. Testers need access to these versions in order to provide feedback in a timely manner. We think of all of these releases as part of the build infrastructure, not just the continuous-integration server delivering an end product.

A continuous-delivery setup like the one described above has several prerequisites:


  • Clean, predictable versioning: preferably semantic versioning, which means that you can support alpha, beta, RC and other pre-release versions
  • Scripted packaging: there can be no manual steps in the entire release process
  • Infrastructure: One or more agents for executing builds, either hosted locally or in the cloud
  • Knowhow: Knowledge of how to configure builds and deployments, preferably distributed among multiple team members (even if access is limited to IT)
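Since the release process has no manual steps, even the check that a release tag is a valid semantic version (including alpha, beta and RC pre-releases) can be scripted. The sketch below uses a deliberately simplified pattern, not the full SemVer 2.0 grammar, and the function name is our own invention.

```shell
#!/bin/sh
# Sketch: validate that a tag is a semantic version, optionally with a
# pre-release suffix. The regex is a simplified assumption covering only
# the alpha/beta/rc suffixes mentioned above, not the full SemVer grammar.
is_semver() {
  echo "$1" | grep -Eq '^[0-9]+\.[0-9]+\.[0-9]+(-(alpha|beta|rc)(\.[0-9]+)?)?$'
}

is_semver "2.1.0"      && echo "2.1.0: ok"
is_semver "2.1.0-rc.1" && echo "2.1.0-rc.1: ok"
is_semver "v2.1"       || echo "v2.1: rejected"
```

A build server can run such a check as the first step of the packaging stage, so an improperly tagged commit never produces a release artifact.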


At Encodo, we have experience with various systems for various types of software. We started off using Jenkins but moved to JetBrains TeamCity several years ago. Web projects have their own packaging and testing mechanisms (e.g. webpack, Mocha) that integrate into almost any build infrastructure. We've also used Fastlane combined with TestFlight for mobile deployment. Our main expertise lies with the configuration of .NET deployments paired with TeamCity.

From this experience, we've distilled the following recommendations:


  • Use the same tools on the build server as your developers do. That is, if you use .NET with R# and the NUnit test runner, then use those same tools on your build server. In this case, TeamCity is a good fit for many of our projects.
  • Avoid writing too many custom scripts for the build server. The build server will need to perform some extra tasks (like clearing databases), but make sure that those scripts are in the code repository and can be executed and tested locally as well. This decreases debugging time in the CI environment.
  • If you do have to write scripts for the build server, consider whether you can use the same scripts on local developer machines. For example, Encodo uses a lot of NAnt scripts to clean, build, deploy and package solutions. We use those scripts locally as well on the build server. This increases the likelihood that an issue with the scripts will be detected locally rather than only on the build server (where it's generally more difficult and time-consuming to address).
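The NAnt scripts mentioned above are .NET-specific, but the principle is language-neutral: keep build logic in a versioned script so the same commands run locally and on the agent. Below is a sketch of that idea; the CI_SERVER variable and the database-reset step are illustrative assumptions, not features of any particular CI product.

```shell
#!/bin/sh
# Sketch of a build script that lives in the code repository and runs both
# on developer machines and on the build agent. CI_SERVER and
# reset_test_database are illustrative assumptions.
set -e

clean() { rm -rf build && mkdir -p build; }

reset_test_database() {
  # An extra task only the build server performs by default -- but because
  # it's a plain function in a versioned script, it can be run and
  # debugged locally too, instead of only failing in the CI environment.
  echo "resetting test database"
}

clean
if [ "${CI_SERVER:-false}" = "true" ]; then
  reset_test_database
fi
echo "building into build/"
```

A developer runs the script as-is; the build agent sets `CI_SERVER=true` to enable the extra step. Either way, the logic under test is identical, which is exactly what makes script issues show up locally first.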
updated on 12/15/2017