Sat, 30 Apr 2016
Automating Deployments: Stage 2: Uploading
Once you have the pipeline for building a package, it's time to distribute the freshly built package to the machines where it's going to be installed.
I've previously explained the nuts and bolts of getting a Debian package into a repository managed by aptly, so it's time to automate that.
Some Assumptions
We are going to need a separate repository for each environment we want to deploy to (or maybe each group of environments; it might be OK, and even desirable, to share a repository between various testing environments that can be used in parallel, for example for security, performance and functional testing).
At some point in the future, when a new version of the operating system is released, we'll also need to build packages for another major version, for example for Debian stretch instead of jessie. So it's best to plan for that case. Based on these assumptions, the path to each repository will be $HOME/aptly/$environment/$distribution.
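With a testing and a production environment, both on jessie, that convention yields these two repository roots:
$HOME/aptly/testing/jessie
$HOME/aptly/production/jessie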
For the sake of simplicity, I'm going to assume a single host on which both testing and production repositories are hosted, in separate directories. If you need those repos on separate servers, it's easy to reverse that decision (or make a different one in the first place).
To ease the transfer and management of the packages, a GoCD agent should be running on the repo server. It can copy the packages from the GoCD server's artifact repository with built-in commands.
Scripting the Repository Management
It would be possible to initialize each repository manually, and only automate the process of adding a package. But since it's not hard to do, the opposite route of creating repositories automatically on the fly is more reliable: the next time you need a new environment or want to support a new distribution, you will benefit from this decision.
So here is a small Perl program that, given an environment, distribution and a package file name, creates the aptly repo if it doesn't exist yet, writes the config file for the repo, and adds the package.
#!/usr/bin/perl
use strict;
use warnings;
use 5.014;
use JSON qw(encode_json);
use File::Path qw(mkpath);
# ":all" also makes system() die on failure (requires IPC::System::Simple),
# so a failing aptly invocation fails the whole script
use autodie qw(:all);

unless (@ARGV == 3) {
    die "Usage: $0 <environment> <distribution> <.deb file>\n";
}
my ($env, $distribution, $package) = @ARGV;

my $base_path   = "$ENV{HOME}/aptly";
my $repo_path   = "$base_path/$env/$distribution";
my $config_file = "$base_path/$env-$distribution.conf";
my @aptly_cmd   = ("aptly", "-config=$config_file");

init_config();
init_repo();
add_package();

# write the aptly config file for this environment/distribution pair
sub init_config {
    mkpath $base_path;
    open my $CONF, '>:encoding(UTF-8)', $config_file;
    say $CONF encode_json({
        rootDir       => $repo_path,
        architectures => [qw( i386 amd64 all )],
    });
    close $CONF;
}

# create and publish the repository, unless it exists already
sub init_repo {
    return if -d "$repo_path/db";
    mkpath $repo_path;
    system @aptly_cmd, "repo", "create", "-distribution=$distribution", "myrepo";
    system @aptly_cmd, "publish", "repo", "myrepo";
}

# add the .deb to the repository and re-publish it
sub add_package {
    system @aptly_cmd, "repo", "add", "myrepo", $package;
    system @aptly_cmd, "publish", "update", $distribution;
}
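For an interactive test, the script (stored as add-package in the deployment-utils repo) can be invoked directly; the package file name here is just an example:
./add-package testing jessie package-info_0.1-1_all.deb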
As always, I've developed and tested this script interactively, and only started to plug it into the automated pipeline once I was confident that it did what I wanted.
And like all software, it's meant to be under version control, so it's now part of the deployment-utils git repo.
More Preparations: GPG Key
Before GoCD can upload the Debian packages into a repository, the go agent needs to have a GPG key that's not protected by a password. You can either log into the go system user account and create it there with gpg --gen-key, or copy an existing .gnupg directory over to ~go (don't forget to adjust the ownership of the directory and the files in it).
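A sketch of both options, assuming the agent runs as the system user go with home directory /var/go, and that an existing key ring lives in your own ~/.gnupg:
# Option 1: create a fresh, passwordless key as the go user
sudo -u go -H gpg --gen-key
# Option 2: copy an existing key ring and fix its ownership
sudo cp -r ~/.gnupg /var/go/
sudo chown -R go:go /var/go/.gnupg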
Integrating the Upload into the Pipeline
The first stage of the pipeline builds the Debian package, and records the resulting file as an artifact. The upload step needs to retrieve this artifact with a fetchartifact task. This is the config for the second stage, to be inserted directly after the first one:
<stage name="upload-testing">
  <jobs>
    <job name="upload-testing">
      <tasks>
        <fetchartifact pipeline="" stage="build" job="build-deb" srcdir="package-info">
          <runif status="passed" />
        </fetchartifact>
        <exec command="/bin/bash">
          <arg>-c</arg>
          <arg>deployment-utils/add-package testing jessie package-info_*.deb</arg>
        </exec>
      </tasks>
      <resources>
        <resource>aptly</resource>
      </resources>
    </job>
  </jobs>
</stage>
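Leaving the pipeline attribute of the fetchartifact task empty (or omitting it) makes GoCD fetch the artifact from the current pipeline.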
Note that testing here refers to the name of the environment (which you can choose freely, as long as you are consistent), not the testing distribution of the Debian project.
There is an aptly resource, which you must assign to the agent running on the repo server. If you want separate servers for testing and production repositories, you'd come up with a more specific resource name here (for example aptly-testing) and a separate one for the production repository.
Make the Repository Available through HTTP
To make the repository reachable from other servers, it needs to be exposed to the network. The most convenient way is over HTTP. Since only static files need to be served (and a directory index), pretty much any web server will do.
An example config for lighttpd:
dir-listing.encoding = "utf-8"
server.dir-listing = "enable"
alias.url = (
    "/debian/testing/jessie/" => "/var/go/aptly/testing/jessie/public/",
    "/debian/production/jessie/" => "/var/go/aptly/production/jessie/public/",
    # more repos here
)
And for the Apache HTTP server, once you've configured a virtual host:
Options +Indexes
Alias /debian/testing/jessie/ /var/go/aptly/testing/jessie/public/
Alias /debian/production/jessie/ /var/go/aptly/production/jessie/public/
# more repos here
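Client machines can then consume a repository with a single APT sources entry; a sketch, with repo.example.com standing in for your repo server's host name (main is aptly's default component):
deb http://repo.example.com/debian/testing/jessie jessie main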
Achievement Unlocked: Automatic Build and Distribution
With these steps done, automatic building and uploading of packages is in place. Since client machines can pull from that repository at will, we can tick off the distribution of packages to the client machines.
I'm writing a book on automating deployments. If this topic interests you, please sign up for the Automating Deployments newsletter. It will keep you informed about automating and continuous deployments. It also helps me to gauge interest in this project, and your feedback can shape the course it takes.