Steve Freeman has been advising youDevise since our early days. He and Nat Pryce have distilled their ideas about test-driven development, mocking, and software design into a super book called Growing Object-Oriented Software, Guided by Tests, or GOOS for short.
Recently Steve, Nat, and Brian Marick gathered some very smart London folks to discuss the ideas in GOOS (and they let me in too). I joined the acceptance-test OpenSpaces group, which discussed these questions:
- Should acceptance tests use the user interface, or should they drive the domain objects directly?
- What practices do others use when writing and running acceptance tests?
- What problems do we encounter when using acceptance tests?
Here’s what we found in each area.
User Interface or Not?
The problem with UI-based testing, as we know at youDevise, is that UI tests can be slow and unstable, and asynchronous browser activity like AJAX behaviour can be awkward to capture and test. These problems can be overcome, but the investment is high. On the other hand, UI tests are often the most understandable to customers, and can be really engaging – you can get viewers oohing and aahing immediately by showing a pointer moving around the screen to perform actions, as one UI test tool does during test playback. If you can then make a live change and rerun the test with new behaviour, you're likely to get lots of immediate, useful feedback. See Resource-Oriented Testing below for a suggestion that can help provide both benefits.
BDD done right gives you a way to describe the user’s goals, not how the user achieves those goals – you can hide the how in the implementation of your test. Traditional tests (and acceptance tests done badly) are very specific about the actions (“click here”, “enter that”) rather than the goals (“choose a product”, “supply credit card details”).
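As a hypothetical illustration (these scenarios are mine, not from the discussion), the same behaviour can be specified both ways – the first version breaks the moment the UI changes, while the second survives because the "how" lives in the step implementations:

```gherkin
# Action-oriented: very specific about the "how"
Scenario: Buy a product
  When I click "Products"
  And I click the row labelled "Widget"
  And I type "4111 1111 1111 1111" into the "Card number" field
  And I click "Confirm"
  Then I see the text "Order complete"

# Goal-oriented: describes what the user wants to achieve
Scenario: Buy a product
  When I choose a product
  And I supply credit card details
  Then my order is confirmed
```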
Scientific Theory
After discussion of specific versus generic data (see below), Antony Marcano crystallised our thinking by describing the process of building a software product as an application of the scientific method. At first, you examine lots of specific examples of the behaviour you are interested in. Next you construct a theory (i.e. initial acceptance tests) that captures and describes some of this behaviour in a more generic way. Then you check this theory with reality through experiments – constructing more specific examples that you can validate with your product owner in working code. This will lead to changes to your theory, i.e. new acceptance tests, and you iterate until your understanding of the domain has developed into a mature and useful theory (i.e. a sufficiently complete set of acceptance tests).
Resource-Oriented Testing
Matt Savage described a method he’s used with success in his current team at Sky, which we decided to call resource-oriented acceptance testing. The first step is to make sure your application can be used in a RESTful style (Matt’s isn’t built to be used this way, so they use a clever shim tool called a “restifier” that converts REST-style requests to expected application actions). Next, provide two implementations of each RESTful action: one that uses direct HTTP (say, adding a new user by sending PUT to http://rest.example.com/user/new) and another that uses a tool like Selenium to perform the same action visibly (navigate to http://www.example.com/user, enter field values, and click [Save]). Now (assuming you are using the standard Given/When/Then style), implement the Givens and Whens of your acceptance tests in terms of the resource-altering verbs PUT, POST, and DELETE, and implement the Thens using GETs. Finally, plug in the direct HTTP version of the actions when you want speedy, non-visual tests (which should be most of the time), and use the visible client-based tests when you want to see the results or test in a browser (e.g. for demonstrations or browser compatibility tests).
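Here’s a minimal sketch of the pattern in Python. Everything in it is hypothetical – the in-memory server stands in for the real application (or the “restifier” shim), and the names `DirectHttpActions` and `BrowserActions` are mine – but it shows the key move: one set of step methods, two interchangeable transports, with Givens/Whens as PUTs and Thens as GETs.

```python
import http.client
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical in-memory "application", standing in for the real system
# (or for a restifier shim in front of it).
USERS = {}

class UserHandler(BaseHTTPRequestHandler):
    def do_PUT(self):
        # e.g. PUT /user/alice with a JSON body creates or replaces a user
        name = self.path.rsplit("/", 1)[-1]
        length = int(self.headers.get("Content-Length", 0))
        USERS[name] = json.loads(self.rfile.read(length) or b"{}")
        self.send_response(201)
        self.end_headers()

    def do_GET(self):
        name = self.path.rsplit("/", 1)[-1]
        if name in USERS:
            body = json.dumps(USERS[name]).encode()
            self.send_response(200)
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep test output quiet

class DirectHttpActions:
    """Fast, non-visual path: drive the application with bare HTTP."""
    def __init__(self, host, port):
        self.host, self.port = host, port

    def add_user(self, name, details):
        # A Given/When step: a resource-altering PUT
        conn = http.client.HTTPConnection(self.host, self.port)
        conn.request("PUT", f"/user/{name}", json.dumps(details))
        return conn.getresponse().status

    def user_details(self, name):
        # A Then step: observe state with a GET
        conn = http.client.HTTPConnection(self.host, self.port)
        conn.request("GET", f"/user/{name}")
        resp = conn.getresponse()
        return json.loads(resp.read()) if resp.status == 200 else None

class BrowserActions:
    """Slow, visible path: the same steps via a browser (sketch only)."""
    def add_user(self, name, details):
        # In real use: drive Selenium to the user form, fill the
        # fields, and click [Save]. Omitted here - needs a browser.
        raise NotImplementedError("requires a browser")

if __name__ == "__main__":
    server = HTTPServer(("127.0.0.1", 0), UserHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    actions = DirectHttpActions("127.0.0.1", server.server_address[1])
    actions.add_user("alice", {"role": "admin"})
    print(actions.user_details("alice"))
    server.shutdown()
```

Because the acceptance-test steps only depend on the shared method names, swapping `DirectHttpActions` for `BrowserActions` changes how the scenario runs without touching the scenarios themselves.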
Matt finds this style provides the best of both worlds: speedy developer tests and comprehensible customer demonstrations. Further, it can force positive changes in the product – for example, when testing one feature, Matt found there was no way to get certain information about search results without actually loading the page in a browser. Making the missing information available as a resource allowed the feature to be tested and provided a better user experience to real browser users as well when it was incorporated into the page.
Problems in Using Acceptance Tests
Antony pointed out something that can get lost in BDD tests, particularly the resource-based ones Matt described: the user’s role. You can focus on your resources and actions and lose sight of how the activities should be grouped and assigned to types of users.
We had some debate about the use of data in tests (see also the Scientific Theory section above). I find it very useful to have realistic examples (“Given a customer phone number of 01555 555 2372”) because they are most meaningful to users and so help us converse about what they need, as well as helping future maintainers learn about the domain quickly. However, Matt finds that once developers understand the domain better, they can ditch the examples and describe the rules for generating them (“Given a fixed-line customer phone number on the Manchester exchange”, which translates in the implementation into “number beginning with 01555 not in the list of mobiles”). This allows them to stress their system with random valid data and find more edge cases quickly (though you need really good diagnostics, and the ability to replay a test with specific input, to make debugging the failures possible).
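A generator for that rule might look like the sketch below (the mobile list and the exact number format are my assumptions, not Matt’s). Seeding the random source and recording the seed is what makes a failing run replayable:

```python
import random

# Assumption: the "list of mobiles" is a set of full numbers that happen
# to start with 01555 and so must be excluded from fixed-line data.
KNOWN_MOBILES = {"01555 0000001"}

def manchester_fixed_line(rng):
    """Generate a random valid fixed-line number on the 01555 exchange."""
    while True:
        number = "01555 " + "".join(str(rng.randrange(10)) for _ in range(7))
        if number not in KNOWN_MOBILES:
            return number

# Log the seed with every test run; re-running with the same seed
# reproduces the exact data that made a test fail.
seed = 20240101
print("seed:", seed, "number:", manchester_fixed_line(random.Random(seed)))
```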
Thanks to all the participants in the group, notably Priya Viseskul, Matt, and Antony.