Tue, 14 Jun 2016
Automated Deployments: Unit Testing
Automated testing is absolutely essential for automated deployments. When you automate deployments, you automatically do them more often than before, which means that manual testing becomes more effort and more annoying, and is usually skipped sooner or later.
So to maintain a high degree of confidence that a deployment won't break the application, automated tests are the way to go.
And yet, I've written twenty blog posts about automating deployments, and this is the first about testing. Why did I drag my feet like this?
For one, testing is hard to generalize. But more importantly, the example project used so far doesn't play well with my usual approach to testing.
Of course one can still test it, but it's not an idiomatic approach that scales to real applications.
The easy way out is to consider a second example project. This also provides a good excuse to test the GoCD configuration template, and explore another way to build Debian packages.
Meet python-matheval
python-matheval is a stupid little web service that accepts a tree of mathematical expressions encoded in JSON format, evaluates it, and returns the result in the response. As the name implies, it's written in Python; Python 3, to be precise.
The actual evaluation logic is quite compact:
# file src/matheval/evaluator.py
from functools import reduce
import operator

# Map operator symbols to the corresponding binary functions.
ops = {
    '+': operator.add,
    '-': operator.sub,
    '*': operator.mul,
    '/': operator.truediv,
}

def math_eval(tree):
    # A leaf node is a plain number and evaluates to itself.
    if not isinstance(tree, list):
        return tree
    # The first list element names the operator; the remaining
    # elements are the operands, which may be subtrees themselves.
    op = ops[tree.pop(0)]
    return reduce(op, map(math_eval, tree))
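A quick interactive session illustrates the tree format: the first list element is the operator symbol, and the remaining elements are the operands, which can be nested trees themselves. This is just a sketch; it assumes you start the interpreter from the src/ directory so that the matheval package is importable.

>>> from matheval.evaluator import math_eval
>>> math_eval(17)                     # a plain number evaluates to itself
17
>>> math_eval(['+', 5, 7])            # 5 + 7
12
>>> math_eval(['+', 5, ['*', 2, 3]])  # 5 + (2 * 3)
11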
Exposing it to the web isn't much effort either, using the Flask library:
# file src/matheval/frontend.py
#!/usr/bin/python3
from flask import Flask, request
from matheval.evaluator import math_eval

app = Flask(__name__)

@app.route('/', methods=['GET', 'POST'])
def index():
    tree = request.get_json(force=True)
    result = math_eval(tree)
    return str(result) + "\n"

if __name__ == '__main__':
    app.run(debug=True)
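With the development server running, a quick smoke test with curl shows the service in action. The following is only an illustrative sketch; it assumes the matheval package is importable, for example by pointing PYTHONPATH at the src directory:

$ PYTHONPATH=src python3 src/matheval/frontend.py &
$ curl --data '["+", 5, ["*", 2, 3]]' http://127.0.0.1:5000/
11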
The rest of the code is part of the build system. As a Python package, it should have a setup.py in the root directory:
# file setup.py
#!/usr/bin/env python
from setuptools import setup

setup(name='matheval',
      version='1.0',
      description='Evaluation of expression trees',
      author='Moritz Lenz',
      author_email='moritz.lenz@gmail.com',
      url='https://deploybook.com/',
      package_dir={'': 'src'},
      requires=['flask', 'gunicorn'],
      packages=['matheval'],
      )
Once a working setup script is in place, the tool dh-virtualenv can be used to create a Debian package containing the project itself and all of its Python-level dependencies.
This creates rather large Debian packages (in this case, around 4 MB for less than a kilobyte of actual application code), but on the upside it allows several applications that depend on different versions of the same Python library to coexist on the same machine. The simple usage of the resulting Debian packages makes it well worth it in many use cases.
Using dh-virtualenv is quite easy:
# file debian/rules
#!/usr/bin/make -f
export DH_VIRTUALENV_INSTALL_ROOT=/usr/share/python-custom

%:
	dh $@ --with python-virtualenv --with systemd

override_dh_virtualenv:
	dh_virtualenv --python=/usr/bin/python3
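With these files in place, a local test build is a single command away. This sketch assumes the build dependencies, in particular debhelper and dh-virtualenv, are installed on the build machine:

$ dpkg-buildpackage -b -us -uc    # binary-only build, without signing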
See the GitHub repository for all the other boring details, like the systemd service files and the control file.
The integration into the GoCD pipeline is easy, using the previously developed configuration template:
<pipeline name="python-matheval" template="debian-base">
  <params>
    <param name="distribution">jessie</param>
    <param name="package">python-matheval</param>
    <param name="target">web</param>
  </params>
  <materials>
    <git url="https://github.com/moritz/python-matheval.git" dest="python-matheval" materialName="python-matheval" />
    <git url="https://github.com/moritz/deployment-utils.git" dest="deployment-utils" materialName="deployment-utils" />
  </materials>
</pipeline>
Getting Started with Testing, Finally
It is good practice to cover business logic with unit tests.
The way the evaluation logic is split into a separate function makes it easy to test that function in isolation. A typical approach is to feed some example inputs into the function and check that the return value is as expected.
# file test/test-evaluator.py
import unittest

from matheval.evaluator import math_eval

class EvaluatorTest(unittest.TestCase):
    def _check(self, tree, expected):
        self.assertEqual(math_eval(tree), expected)

    def test_basic(self):
        self._check(5, 5)
        self._check(['+', 5], 5)
        self._check(['+', 5, 7], 12)
        self._check(['*', ['+', 5, 4], 2], 18)

if __name__ == '__main__':
    unittest.main()
One can execute the test suite (here just one test file so far) with the nosetests command from the nose Python package:
$ nosetests
.
----------------------------------------------------------------------
Ran 1 test in 0.004s
OK
The Python way of exposing the test suite is to implement the test command in setup.py, which can be done with the line test_suite='nose.collector', in the setup() call. And of course one needs to add nose to the list passed to the requires argument.
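Putting both changes together, setup.py now reads:

# file setup.py
#!/usr/bin/env python
from setuptools import setup

setup(name='matheval',
      version='1.0',
      description='Evaluation of expression trees',
      author='Moritz Lenz',
      author_email='moritz.lenz@gmail.com',
      url='https://deploybook.com/',
      package_dir={'': 'src'},
      requires=['flask', 'gunicorn', 'nose'],
      test_suite='nose.collector',
      packages=['matheval'],
      )

After this change, python setup.py test runs the same tests that the nosetests command runs.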
With these measures in place, the debhelper and dh-virtualenv tooling takes care of executing the test suite as part of the Debian package build. If any of the tests fail, so does the build.
Running the test suite in this way is advantageous, because it runs the tests with exactly the same versions of all involved Python libraries that end up in the Debian package, and thus make up the runtime environment of the application. It is possible to achieve this through other means, but other approaches usually take much more work.
Conclusions
You should have enough unit tests to be confident that the core logic of your application works correctly. Running the unit tests as part of the package build is an easy and pragmatic solution that ensures only "good" versions of your software are ever packaged and installed.
In future blog posts, other forms of testing will be explored.
I'm writing a book on automating deployments. If this topic interests you, please sign up for the Automating Deployments newsletter. It will keep you informed about automated and continuous deployments. It also helps me to gauge interest in this project, and your feedback can shape the course it takes.