Tue, 21 Jun 2016
Automating Deployments: Smoke Testing and Rolling Upgrades
In the last installment I talked about unit testing, which covers the logic of your application. Unit testing is a good and efficient way to ensure the quality of the business logic; however, unit tests tend to test components in isolation.
You should also check that several components work together well, which can be done with integration tests or smoke tests. The distinction between these two is a bit murky at times, but typically integration tests are still done somewhat in isolation, whereas smoke tests are run against an installed copy of the software in a complete environment, with all external services available.
A smoke test thus goes through the whole software stack. For a web application, that typically entails a web server, an application server, a database, and possibly integration points with other services such as single sign-on (SSO) or external data sources.
When to Smoke?
Smoke tests cover a lot of ground at once. A single test might require a working network, a correctly configured firewall, web server, application server, database, and so on. This is an advantage, because it means a smoke test can detect a broad class of errors, but it is also a disadvantage, because its diagnostic value is low: when it fails, you don't know which component is to blame, and have to investigate each failure anew.
Smoke tests are also much more expensive than unit tests; they tend to take more time to write, take longer to execute, and are more fragile in the face of configuration or data changes.
So typical advice is to have a small number of smoke tests, maybe between one and twenty, or around one percent of the number of unit tests you have.
As an example, if you were to develop a flight search and recommendation engine for the web, your unit tests would cover the different scenarios a user might encounter, and check that the engine produces the best possible suggestions. In the smoke tests, you would just check that you can enter a starting point, destination and date of travel, and that you get a list of flight suggestions at all. If there is a membership area on that website, you would test that you cannot access it without credentials, and that you can access it after logging in. So, three smoke tests, give or take.
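To make that concrete, such smoke tests can be written as a small shell script; everything in the following sketch (URLs, parameters, credentials and the expected response markers) is made up purely for illustration:
#!/bin/bash
# Hypothetical smoke tests for a flight search website.
set -e  # abort with a non-zero exit status at the first failing check
# 1. A simple search returns at least one flight suggestion.
curl --silent 'https://flights.example.com/search?from=FRA&to=LHR&date=2016-07-01' \
    | grep -q flight-suggestion
# 2. The members area is not accessible without logging in.
status=$(curl --silent -o /dev/null -w '%{http_code}' https://flights.example.com/members)
test "$status" != "200"
# 3. The members area is accessible with valid credentials.
curl --silent --user demo:secret https://flights.example.com/members | grep -q Welcome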
White Box Smoke Testing
The examples mentioned above are basically black-box smoke testing, in that they don't care about the internals of the application, and approach the application just like a user. This is very valuable, because ultimately you care about your user's experience.
But sometimes some aspects of the application aren't easy to smoke test, yet break often enough to warrant automated smoke tests. A practical solution is to offer some kind of self-diagnosis, for example a web page where the application tests its own configuration for consistency, checks that all the necessary database tables exist, and that external services are reachable.
Then a single smoke test can call the status page, and throw an error whenever the status page is not reachable or reports an error. This is a white box smoke test.
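As a sketch, assuming the application exposed its self-diagnosis under the hypothetical path /status, returning HTTP status 200 and the body OK only if all internal checks pass, the white box smoke test could look like this:
# file smoke-tests/status-page (hypothetical)
#!/bin/bash
# Call the self-diagnosis page of the application at base URL $1.
# --fail makes curl exit non-zero on HTTP error responses (4xx/5xx),
# and grep exits non-zero if the body does not report success.
curl --silent --fail "$1/status" | grep -q '^OK$'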
Status pages for white box smoke tests can be reused in monitoring checks, but it is still a good idea to check them explicitly as part of the deployment process.
White box smoke testing should not replace black box smoke testing, but rather complement it.
An Example Smoke Test
The matheval application from the previous blog post offers a simple HTTP endpoint, so any HTTP client will do for smoke testing.
Using the curl command line HTTP client, a possible request looks like this:
$ curl --silent -H "Accept: application/json" --data '["+", 37, 5]' -XPOST http://127.0.0.1:8800/
42
An easy way to check that the output matches expectations is by piping it through grep:
$ curl --silent -H "Accept: application/json" --data '["+", 37, 5]' -XPOST http://127.0.0.1:8800/ | grep ^42$
42
The output is the same as before, but the exit status is non-zero if the output deviates from the expectation.
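To see this, you can inspect the exit status directly (assuming the service is still running locally); grep's -q option suppresses the output and only sets the exit status:
$ curl --silent -H "Accept: application/json" --data '["+", 37, 5]' -XPOST http://127.0.0.1:8800/ | grep -q ^42$; echo $?
0
$ curl --silent -H "Accept: application/json" --data '["+", 37, 5]' -XPOST http://127.0.0.1:8800/ | grep -q ^41$; echo $?
1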
Integrating the Smoke Tests Into the Pipeline
One could add a smoke test stage after each deployment stage (that is, one after the test deployment, one after the production deployment).
This setup would prevent a version of your application from reaching the production environment if it failed smoke tests in the testing environment. Since the smoke test is just a shell command that indicates failure with a non-zero exit status, adding it as a command in your deployment system should be trivial.
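For instance, once the smoke test lives in a small script (like the one shown below for python-matheval), the smoke test stage only needs to run that script against the host it just deployed to and let the exit status decide whether the stage passes; the host name here is made up:
$ deployment-utils/smoke-tests/python-matheval testing.example.com
42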
If you have just one instance of your application running, this is the best you can do. But if you have a farm of servers, and several instances of the application running behind some kind of load balancer, it is possible to smoke test each instance separately during an upgrade, and abort the upgrade if too many instances fail the smoke test.
All big, successful tech companies guard their production systems with such partial upgrades gated by checks, or with even more elaborate versions thereof.
A simple approach to such a rolling upgrade is to write an ansible playbook for the deployment of each package, and have it run the smoke tests for each machine before moving to the next:
# file smoke-tests/python-matheval
#!/bin/bash
# Ask the service on host $1 to evaluate 37 + 5; exits non-zero
# if the answer is not 42 (grep finds no matching line).
curl --silent -H "Accept: application/json" --data '["+", 37, 5]' -XPOST http://"$1":8800/ | grep ^42$
# file ansible/deploy-python-matheval.yml
---
- hosts: web
  serial: 1
  max_fail_percentage: 1
  tasks:
    - apt: update_cache=yes package=python-matheval={{package_version}} state=present force=yes
    - local_action: command ../smoke-tests/python-matheval "{{ansible_host}}"
      changed_when: False
As the smoke tests grow over time, it is not practical to cram them all into the ansible playbook, and doing so would also limit reusability. Instead, they live in a separate file in the deployment-utils repository. Another option would be to build a package from the smoke tests and install it on the machine that ansible runs on.
While it would be easy to execute the smoke test command on the machine where the service is installed, running it as a local action (that is, on the control host where the ansible playbook is started) also tests the network and firewall part, and thus mimics the actual usage scenario more realistically.
GoCD Configuration
To run the new deployment playbook from within the GoCD pipeline, change the testing deployment job in the template to:
<tasks>
  <fetchartifact pipeline="" stage="build" job="build-deb" srcfile="version" />
  <exec command="/bin/bash" workingdir="deployment-utils/ansible/">
    <arg>-c</arg>
    <arg>ansible-playbook --inventory-file=testing --extra-vars=package_version=$(< ../../version) #{deploy_playbook}</arg>
  </exec>
</tasks>
And the same for production, except that it uses the production inventory file. This change to the template also changes the parameters that need to be defined in the pipeline definition. In the python-matheval example it becomes:
<params>
  <param name="distribution">jessie</param>
  <param name="package">python-matheval</param>
  <param name="deploy_playbook">deploy-python-matheval.yml</param>
</params>
Since there are two pipelines that share the same template, the second pipeline (for the package package-info) also needs a deployment playbook. It is very similar to the one for python-matheval; it just lacks the smoke test for now.
Conclusion
Writing a small number of smoke tests is very beneficial for the stability of your applications.
Rolling updates with integrated smoke tests for each system involved are pretty easy to do with ansible, and can be integrated into the GoCD pipeline with little effort. They mitigate the damage of deploying a bad version or a bad configuration by limiting it to one system, or a small number of systems in a bigger cluster.
With this addition, the deployment pipeline is likely to be at least as robust as most manual deployment processes, but it requires much less effort, scales more easily to more packages, and gives more insight into the timeline of deployments and installed versions.
I'm writing a book on automating deployments. If this topic interests you, please sign up for the Automating Deployments newsletter. It will keep you informed about automating and continuous deployments. It also helps me to gauge interest in this project, and your feedback can shape the course it takes.