Tag: python


wsgi-liveserver

Following on from my last post, I have now split the LiveServerTestCase out into its own Python package to make it easier to reuse in other projects. I have called it wsgi-liveserver and it is the first Python package that I have released. The package can be downloaded from PyPI, the code can be found on GitHub, and I welcome any feedback.

Testing Bottle Applications with Selenium

Selenium is a really nice framework for testing web application front-ends by automating actions through a web browser, but it requires a web server to be running so that the browser can interact with the web application. Most other tests interact with the code directly, so this requirement leads to a slight problem... how should the web server be started when running tests?

The simplest way to run a Selenium test is to manually start up a web server for your application and then run the tests against it, but this can get a bit tedious after a while (especially if you keep forgetting to start the server).

Django provides a LiveServerTestCase which handles starting up a web server to serve your Django application, letting your Selenium tests run against it, and then stopping the server again. This is a really nice approach, but I wanted to be able to do something similar when I am not using Django.

Last week I came across the flask-testing framework, which provides similar functionality for Flask applications. The flask-testing LiveServerTestCase is inspired by the Django version, but is much simpler. Unfortunately it is also somewhat specific to Flask.

What I really wanted was something that could be used with any WSGI compliant web application, so I wrote my own, loosely based on the flask-testing version. You simply inherit from the LiveServerTestCase class instead of unittest.TestCase when creating your test class, override the create_app() method to return your WSGI application, and write your tests as normal. When you run your tests it handles starting and stopping the web server in the background as required. I have written a very basic example Bottle application called bottle-selenium to show it in action.
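As a rough sketch, a test for a trivial Bottle application might look something like this (the wsgi_liveserver import name and the server address are assumptions on my part here; the bottle-selenium example shows the real thing):

import bottle
from selenium import webdriver
from wsgi_liveserver import LiveServerTestCase  # import name assumed


def create_test_app():
    # A trivial Bottle application to test against.
    app = bottle.Bottle()

    @app.route('/')
    def index():
        return '<h1>Hello</h1>'

    return app


class FrontPageTest(LiveServerTestCase):
    def create_app(self):
        # Return the WSGI application that the live server should serve.
        return create_test_app()

    def setUp(self):
        self.browser = webdriver.Firefox()

    def tearDown(self):
        self.browser.quit()

    def test_front_page(self):
        # The address is assumed here; the package decides the actual
        # host and port the application is served on.
        self.browser.get('http://localhost:8080/')
        self.assertIn('Hello', self.browser.page_source)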

I originally wrote this for use with Bottle applications, mainly because they are very simple to work with, but my eventual goal is to use it for testing the development of Roundup instances, so it should work with any WSGI compliant web application.

Update (22/03/2012): The LiveServerTestCase is now available in its own package called wsgi-liveserver.

Testing Exception Messages

The Python unittest module provides support for testing that an exception is raised using the assertRaises() method, but sometimes we also need to test that the exception message is what we expect. Python v2.7 introduced the assertRaisesRegexp() method, which can be used to test exception messages using regular expressions, but if you are stuck with v2.6 or earlier you will need to do something like:

import unittest


def raise_exception(yup=True):
    if yup:
        raise ValueError('Yup, exception raised.')


class BasicExceptionTest(unittest.TestCase):
    def test_message(self):
        try:
            raise_exception(True)
            # If we get this far no exception was raised, so fail the test.
            self.fail('ValueError was not raised')
        except ValueError as e:
            self.assertEqual(str(e), 'Yup, exception raised.')


if __name__ == '__main__':
    unittest.main(verbosity=2)

Looking at test_message() we first wrap the function we are testing (raise_exception()) in a try ... except statement to catch any exception that may be raised. If no exception is raised then we call fail() to signal that the test has failed. If the correct exception has been raised (in this case ValueError) we use assertEqual() to test that the exception message is correct. If an exception that we were not expecting is raised, then it will be handled by the TestCase class and the test will be marked as having an error. With this simple test pattern every possible outcome should be handled correctly.
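For comparison, on Python v2.7 or later the same check can be written directly with assertRaisesRegexp() (note that the second argument is a regular expression, not a plain string):

import unittest


class RegexpExceptionTest(unittest.TestCase):
    def test_message(self):
        # raise_exception() is the same function as in the example above.
        # The pattern is matched against str() of the raised exception.
        self.assertRaisesRegexp(ValueError, r'^Yup, exception raised\.$',
                                raise_exception, True)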

If you plan to be writing a lot of these sorts of tests, then it may be worth creating your own TestCase class that provides an assert method for testing exception messages:

import unittest


def raise_exception(yup=True):
    if yup:
        raise ValueError('Yup, exception raised.')


class ExceptionMessageTestCase(unittest.TestCase):
    def assertRaisesMessage(self, exception, msg, func, *args, **kwargs):
        # Call func() and check that it raises the expected exception
        # type with the expected message.
        try:
            func(*args, **kwargs)
            self.fail('%s was not raised' % exception.__name__)
        except exception as e:
            self.assertEqual(str(e), msg)


class MessageExceptionTest(ExceptionMessageTestCase):
    def test_message(self):
        self.assertRaisesMessage(ValueError, 'Yup, exception raised.',
                                 raise_exception, True)


if __name__ == '__main__':
    unittest.main(verbosity=2)

The assertRaisesMessage() method is very similar to the assertRaises() method except that it also takes a msg argument that will be used to compare against the exception message.

Both of these test patterns could also be extended to use regular expressions to test messages (similar to assertRaisesRegexp()), but I find that simple string comparisons are usually enough for my needs.
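For example, a regular expression variant of the custom assert method might look something like this (the method name here is just my own invention):

import re
import unittest


class ExceptionMessageTestCase(unittest.TestCase):
    def assertRaisesMessageRegexp(self, exception, pattern, func,
                                  *args, **kwargs):
        # Like assertRaisesMessage(), but the expected message is a
        # regular expression searched for in the exception message.
        try:
            func(*args, **kwargs)
            self.fail('%s was not raised' % exception.__name__)
        except exception as e:
            if not re.search(pattern, str(e)):
                self.fail('%r does not match %r' % (str(e), pattern))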

Dict Diff and Test Driven Development

I recently wrote a short function called dict_diff() that takes two dicts, compares them, and returns another two dicts containing only the differences between the originals (the code is available as a gist). It works something like:

dict_diff(
    {'a': {'ab': 12}, 'b': {'ba': 21, 'bb': 22}, 'c': {'cc': 33}},
    {'a': {}, 'b': {'ba': 21, 'bc': 23}, 'c': {'cc': 33}},
)

# returns: (
#    {'a': {'ab': 12}, 'b': {'bb': 22}},
#    {'b': {'bc': 23}}
# )
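
The actual implementation is in the gist, but a minimal sketch along these lines produces the output above:

def dict_diff(d1, d2):
    # Return two dicts: the items present in (or different in) d1, and
    # the items present in (or different in) d2. Values that are
    # themselves dicts are compared recursively, and empty sub-results
    # are dropped so only real differences remain.
    diff1, diff2 = {}, {}
    for key in set(d1) | set(d2):
        if key not in d2:
            diff1[key] = d1[key]
        elif key not in d1:
            diff2[key] = d2[key]
        elif d1[key] != d2[key]:
            if isinstance(d1[key], dict) and isinstance(d2[key], dict):
                sub1, sub2 = dict_diff(d1[key], d2[key])
                if sub1:
                    diff1[key] = sub1
                if sub2:
                    diff2[key] = sub2
            else:
                diff1[key] = d1[key]
                diff2[key] = d2[key]
    return diff1, diff2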

I wrote it to make the output of assertEqual() easier to read when dealing with large dicts that are not equal. It is a recursive function, but other than that it is fairly simple and nothing very special. What was different this time is that I wrote it using test-driven development (TDD).

Generally when writing recursive functions I tend to get caught up trying to make the recursive part work correctly from the beginning, and lose sight of what the function is actually supposed to be doing. Because I knew what the expected output should be ahead of time, I was able to take a test-driven approach and write the test cases first, then work my way through making them all pass. Starting with the simple tests and working up to the more complex ones meant everything just fell into place, and I didn't have to worry about breaking anything when I introduced the recursion.
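The progression looked something like this: start with the trivial flat cases, then introduce nesting (these are illustrative cases rather than the exact tests from the gist):

import unittest


class DictDiffTest(unittest.TestCase):
    def test_equal_dicts(self):
        # The simplest case: identical dicts have no differences.
        self.assertEqual(dict_diff({'a': 1}, {'a': 1}), ({}, {}))

    def test_changed_value(self):
        # A flat dict with a single changed value.
        self.assertEqual(dict_diff({'a': 1}, {'a': 2}),
                         ({'a': 1}, {'a': 2}))

    def test_nested_difference(self):
        # The recursive case, tackled once the flat cases pass.
        self.assertEqual(dict_diff({'a': {'b': 1}}, {'a': {'b': 2}}),
                         ({'a': {'b': 1}}, {'a': {'b': 2}}))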

In the past I have tended to write tests in tandem with the code (sometimes before, sometimes after) without putting much thought into planning them out as test cases first. Since this was a simple function I knew what most of the results should be without much effort, but it was valuable to see how well the approach worked. I think I'll spend more time planning out test cases to drive my development in the future.