Testing Framework

Preliminaries

I usually try to separate the presentation from the logic. This allows the logic to be tested without much thought about Webware. Thus, this page does not say much about testing Webware pages but concentrates on testing the model.

I've tried to automate the retrieval of Webware pages using cURL. This works quite well, but I don't have much experience with it yet. Please complement this page with your experiences.

Define test classes

Each class to be tested is accompanied by a test class (usually, I use the name "Test" + class name for it). This class subclasses unittest.TestCase and contains the test methods, whose names start with "test". The setUp method prepares the data structures necessary for testing; the tearDown method disposes of any open file handles and so on.
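As a sketch, such a test class might look like the following (WorkList here is a hypothetical stand-in for whatever model class you are testing):

```python
import unittest

# Hypothetical class under test, standing in for a real model class.
class WorkList:
    def __init__(self):
        self.items = []

    def add(self, item):
        self.items.append(item)

class TestWorkList(unittest.TestCase):
    def setUp(self):
        # Prepare the data structures needed by each test method.
        self.worklist = WorkList()

    def tearDown(self):
        # Dispose of open file handles, connections, and so on.
        self.worklist = None

    def testAdd(self):
        self.worklist.add("piece")
        self.assertEqual(self.worklist.items, ["piece"])
```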

Define a function returning the test suite

Each module contains a function called suite(). This function creates and returns a test suite containing all testcases for the module:

def suite():
    suite = unittest.TestSuite()
    for test in [TestWorkList,
                 TestPersistentWorkList]:
        suite.addTest(unittest.makeSuite(test, "test"))
    return suite

Create a test driver module

In order to run all tests, I have a module test.py which looks like this:

import file
import work
import piece

# etc...

def main():
    suite = unittest.TestSuite()
    for module in [file, work, piece]: # etc.
        suite.addTest(module.suite())
    unittest.TextTestRunner(verbosity=0).run(suite)

if __name__ == "__main__":
    main()

This module is rather low-tech; it would be easy to enhance it to load all available modules (maybe in a directory tree) and create the respective test suite.
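One possible enhancement along these lines is sketched below: scan a directory for modules that define a suite() function and collect them all. (The function name and the discovery rules are my own invention; the directory must be on sys.path for the import to work.)

```python
import os
import unittest

def autoSuite(directory):
    # Build one big suite from every module in `directory` that
    # defines a suite() function, as described above.
    # Note: `directory` must be on sys.path for __import__ to find it.
    combined = unittest.TestSuite()
    for name in sorted(os.listdir(directory)):
        if name.endswith(".py"):
            module = __import__(name[:-3])
            if hasattr(module, "suite"):
                combined.addTest(module.suite())
    return combined
```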

Call this module from every class that should be tested

The last statement in every module is:

if __name__ == "__main__":
    import test
    test.main()

Run the tests whenever you changed something

This is very simple when using Emacs with Python mode. Just press Control-C twice and watch the tests run by. You might want to tweak unittest.py so that it does not report successful tests - in that case Emacs just reports "no output" in the status line when all tests have passed, which is about as unobtrusive as it gets ;-)

If you do not use Emacs, you have several options:

   * use the graphical client (GuiTestRunner) of unittest
   * open a console and run the tests with python test.py
   * use your editor's capabilities to start external programs

When you use the console, you don't need to include the test module in each module, of course.

Testing the interface with cURL

If you want to test the pages as they are presented by Webware, you have to use a program which automates the retrieval of HTML pages. cURL (http://curl.haxx.se) does a good job. It can save and send cookies and has a good (command line) interface.

You can e.g. request a page with:

curl http://www.python.org

With the following line you can submit a query to search.python.org:

curl -d "qt=html&submit=Webware" http://search.python.org/query.html

The extraction and forwarding of cookies follows a similar pattern.

You can use curl either from shell scripts - request a page, compare it with a "known good source" and report any differences - or you can integrate it into your testsuite using pycurl (http://pycurl.sourceforge.net). This module provides Curl objects that can be configured to retrieve webpages. Unfortunately, its documentation is rather poor.

-- AlbertBrandl - 14 Mar 2002

It is also very useful to be able to automatically set up and tear down the WebKit environment + Servlets that will be used as part of a test suite. The regression testing framework that comes as part of WebwareExpRefactoring is very handy for this. It uses the builtin HTTPServer and thus can be run out of the box without requiring an external webserver to be installed and configured.

This framework uses Steve Purcell's HTTPSession class to do the same thing you're doing with curl directly from Python. It's trivial to write new test cases using this approach.

-- TavisRudd - 14 Mar 2002

What's the best way of dealing with highly stateful applications? For instance, websites with database backends, where most actions involve querying or modifying the database. Or sites where you can't easily get to some functionality without going through a long path - login, interactive feedback (even something as simple as providing a list of options), etc.? I haven't done the testing I'd like, because it seems so difficult to set up tests in these environments... hints on doing it better?

-- IanBicking - 14 Mar 2002

If possible, you should encapsulate the state information. I use classes for the interaction with the database, another class for workflow information (which workpiece will be sent to which author after which action), and so on. A controller cooperates with the workflow manager to determine the next person responsible for updating a document.

Since these classes only use a tiny subset of the Webware framework (mostly response objects, sometimes transaction objects), it's very easy to write stubs for them. Python does not care whether an object conforms to a certain interface, so it happily accepts such stub objects in place of the real ones, as long as they provide the methods that are actually used.
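A minimal sketch of such a stub follows. The method names here are assumptions for illustration - a real Webware response object has a richer interface, but the stub only needs to provide what the code under test actually calls:

```python
# Stub standing in for a Webware response object. Only the methods the
# code under test actually uses need to exist - Python's duck typing
# takes care of the rest.
class StubResponse:
    def __init__(self):
        self.chunks = []

    def write(self, text):
        self.chunks.append(text)

    def contents(self):
        return "".join(self.chunks)

# Hypothetical presentation-free logic that only needs write():
def renderGreeting(response, name):
    response.write("Hello, %s!" % name)
```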

The final acceptance tests are done by hand. But WebwareExpRefactoring looks rather promising to me - I'll certainly have a look at it before I start testing the next release.

-- AlbertBrandl - 15 Mar 2002

The combination of PyUnit and HTTPSession in WebwareExpRefactoring is a good fit for this sort of stuff. HTTPSession allows you to model a sequence of HTTP requests (including header and cookie management) and maintain state throughout. You can push cookies, manipulate GET and POST vars, and do practically everything else you can do with a web browser. If you're working with a database, you could set up and tear down a test database as part of your unittests. The beauty of this framework is that you can control and manipulate everything from a single Python test module. There's no need to muck around with external POST files, etc. Have a look at http://cvs.sourceforge.net/cgi-bin/viewcvs.cgi/expwebware/Webware/WebKit/Test/
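The database setup/teardown idea above might be sketched like this - sqlite3 is used here purely as a convenient stand-in for whatever backend you actually run against:

```python
import sqlite3
import unittest

class TestWithDatabase(unittest.TestCase):
    def setUp(self):
        # Create a fresh throwaway database for every single test,
        # so tests cannot influence each other through shared state.
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE pieces (name TEXT)")
        self.conn.execute("INSERT INTO pieces VALUES ('fixture')")

    def tearDown(self):
        # Tear the test database down again.
        self.conn.close()

    def testFixtureIsPresent(self):
        rows = self.conn.execute("SELECT name FROM pieces").fetchall()
        self.assertEqual(rows, [("fixture",)])
```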

-- TavisRudd - 14 Mar 2002

Just an idea: Would it be possible to capture a real interaction with the webserver and use this for setting up acceptance tests that are then run with the above combination? I'm in search of a simple means to get the users create their acceptance tests.

-- AlbertBrandl - 15 Mar 2002

Intriguing... I'm thinking it would be easiest to hack WebKit.cgi, so that it recorded all the requests and response (maybe in serially-named files, which would be easy to inspect). Then you could put together a very simple little adapter that would work off those files, resubmitting the requests and either comparing, or simply recording the new responses.
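The recording half of this idea could be sketched roughly as follows (all names are invented for illustration; the real hook would sit inside the adapter):

```python
import os

def recordExchange(directory, request_text, response_text):
    # Dump each request/response pair into serially-named files so the
    # exchange can be inspected later and replayed against the server.
    n = len([f for f in os.listdir(directory) if f.endswith(".request")])
    for suffix, text in (("request", request_text),
                         ("response", response_text)):
        path = os.path.join(directory, "%04d.%s" % (n, suffix))
        with open(path, "w") as f:
            f.write(text)
```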

A page-comparer would be a separate development, hopefully robust against mere style changes. But even if it weren't, it could easily check that no exceptions occurred, and maybe check that pages that shouldn't have changed didn't (for instance, if you have 50 pages, a single change might only affect the output of a couple of them).

Growing your testing suite would be difficult in this case -- you'd have to add something to this adapter that would replay all the beginning events, then let you append more. Also, you'd have to deal with changes to the interface -- for instance, where you change the structure of a form -- so that you would replay only X events (up to where the interface has changed) and then manually continue.

But I really like this idea... it would make testing so much easier, even as a developer. I'm going to start on it right away.

-- IanBicking - 15 Mar 2002

Puffin is a very cool Python web app testing framework, actively developed by Keyton Weissinger. Would be nice to have a look at it.

-- KendallClark - 27 Apr 2002