Posts in this category
- Automating Deployments: A New Year and a Plan
- Automating Deployments: Why bother?
- Automating Deployments: Simplistic Deployment with Git and Bash
- Automating Deployments: Building Debian Packages
- Automating Deployments: Debian Packaging for an Example Project
- Automating Deployments: Distributing Debian Packages with Aptly
- Automating Deployments: Installing Packages
- Automating Deployments: 3+ Environments
- Architecture of a Deployment System
- Introducing Go Continuous Delivery
- Technology for automating deployments: the agony of choice
- Automating Deployments: New Website, Community
- Continuous Delivery for Libraries?
- Managing State in a Continuous Delivery Pipeline
- Automating Deployments: Building in the Pipeline
- Automating Deployments: Version Recycling Considered Harmful
- Automating Deployments: Stage 2: Uploading
- Automating Deployments: Installation in the Pipeline
- Automating Deployments: Pipeline Templates in GoCD
- Automatically Deploying Specific Versions
- Story Time: Rollbacks Saved the Day
- Automated Deployments: Unit Testing
- Automating Deployments: Smoke Testing and Rolling Upgrades
- Automating Deployments and Configuration Management
- Ansible: A Primer
- Continuous Delivery and Security
- Continuous Delivery on your Laptop
Sun, 21 Feb 2016
Technology for automating deployments: the agony of choice
As an interlude I'd like to look at alternative technology stacks that you could use in your deployment project. I settled on a certain stack because it makes sense in the context of the bigger environment.
This is a mostly Debian-based infrastructure with its own operations team, and software written in various (mostly dynamic) programming languages.
If your organization writes only Java code (or code in programming languages that are based on the JVM), and your operations folks are used to that, it might be a good idea to ship .jar files instead. Then you need a tool that can deploy them, and a repository to store them.
I'm a big fan of operating system packages, for three reasons: the operators are familiar with them, they are language agnostic, and configuration management software typically supports them out of the box.
If you develop applications in several different programming languages, say perl, python and ruby, it doesn't make sense to build a deployment pipeline around three different software stacks and educate everybody involved about how to use and debug each of the language-specific package managers. It is much more economical to have the maintainer of each application build a system package, and then use one toolchain to deploy that.
That doesn't necessarily imply building a system package for each upstream package. Fat-packaging is a valid way to avoid an explosion of packaging tasks, and also to avoid clashes when dependencies on conflicting versions of the same package exist. dh-virtualenv packs python software and all of its python dependencies into a single Debian package; only the python interpreter itself needs to be installed on the target machine.
If you need to deploy to multiple operating system families and want to build only one package, nix is an interesting approach, with the additional benefit of allowing parallel installation of several versions of the same package. That can be useful for running two versions in parallel, and only switching over to the new one for good when you're convinced that there are no regressions.
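A minimal sketch of the nix workflow described above, using profile generations for parallel versions and rollback; the package name `myapp` is a placeholder, not from the original post:

```shell
# Install a version of the application into the current profile
# (the package name "myapp" is hypothetical)
nix-env --install myapp-1.2

# Installing a newer version creates a new profile generation;
# the old version remains in the nix store
nix-env --install myapp-1.3

# Inspect the generations that exist for this profile
nix-env --list-generations

# If the new version shows regressions, switch back atomically
nix-env --rollback
```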
The choice of package format dictates the repository format. Debian packages are stored in a different structure than Pypi packages, for example. For each repository format there is tooling available to help you create and update the repository.
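For Debian packages, for example, aptly (covered in an earlier post in this series) can create and publish such a repository. A rough sketch, with repository and package names as placeholders:

```shell
# Create a local Debian repository, with a default distribution
# ("myrepo" and "jessie" are example names)
aptly repo create -distribution=jessie myrepo

# Add a freshly built package to it
aptly repo add myrepo mypackage_1.0-1_amd64.deb

# Publish it so apt clients can fetch from it
# (publishing requires a GPG key for signing)
aptly publish repo myrepo
```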
Pulp is a rather general and scalable repository management software that was originally written for RPM packages, but now also supports Debian packages, Python (pypi) packages and more. Compared to the other solutions mentioned so far (which are just command line programs you run when you need something, and file system as storage), it comes with some administrative overhead, because there's at least a MongoDB database and a RabbitMQ message broker required to run it. But when you need such a solution, it's worth it.
A smaller repository management tool for Python is pip2pi. In its simplest form you just copy a few .tar.gz files into a directory, run dir2pi . in that directory, and make it accessible through a web server.
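The whole pip2pi workflow fits in a few commands; the paths and the package name below are illustrative, not prescribed by the tool:

```shell
# Install the tool itself
pip install pip2pi

# Collect release tarballs in a directory of your choosing
mkdir -p /srv/pypi
cp dist/*.tar.gz /srv/pypi/

# Generate the PyPI-compatible "simple" index structure
cd /srv/pypi && dir2pi .

# After pointing a web server at /srv/pypi, clients install with:
pip install --index-url https://pypi.example.com/simple/ mypackage
```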
Installing a package and its dependencies often looks easy on the surface: apt-get update && apt-get install $package. But that is deceptive, because many installers are interactive by nature, require special flags to force installation of an older version, or have other potential pitfalls.
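A non-interactive, version-pinned apt-get invocation might look like the following sketch; the package name and version are examples, and --allow-downgrades requires a reasonably recent apt:

```shell
# Suppress debconf prompts during installation
export DEBIAN_FRONTEND=noninteractive

apt-get update

# Pin an exact version and keep existing config files
# if the package ships modified conffiles
apt-get install -y \
    -o Dpkg::Options::="--force-confdef" \
    -o Dpkg::Options::="--force-confold" \
    --allow-downgrades \
    mypackage=1.2-1
```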
Ansible provides modules for installing .deb packages, python modules, perl modules, RPM packages through yum, nix packages and many others. It also requires little up-front configuration on the destination system and is very beginner-friendly, but still offers enough power for more complex deployment tasks. It can also handle configuration management.
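For instance, installing a Debian package on a group of hosts can be a one-line ad-hoc command with Ansible's apt module; the host group and package name here are assumed to come from your inventory:

```shell
# Run the apt module against the "webservers" inventory group,
# escalating privileges for the install
ansible webservers --become -m apt \
    -a "name=mypackage state=present update_cache=yes"
```

The same module arguments work unchanged inside a playbook task, which is where ordering and rolling-upgrade logic would live.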
An alternative is Rex, with which I have no practical experience.
Not all configuration management systems are a good fit for managing deployments. For example, Puppet doesn't seem to offer a good way to specify an order for package upgrades ("first update the backend on servers bck01 and bck02, then the frontend on www01, and then the rest of the backend servers").
I'm writing a book on automating deployments. If this topic interests you, please sign up for the Automating Deployments newsletter. It will keep you informed about automating and continuous deployments. It also helps me to gauge interest in this project, and your feedback can shape the course it takes.