A First Order Acceptance Test Automation Stack

Acceptance testing, sometimes known as behavior-driven development (BDD), has moved to its rightful place at the forefront of software development alongside user stories. Here’s an approach to understanding everything that goes into modern acceptance test automation.

User Stories are the Foundation of Automated Acceptance Testing

Agile stories come in a variety of shapes and sizes. Some are very specific and cover a fundamental aspect of an experience; others cover a collection of behaviors. The following larger-sized story, sometimes called an ‘epic’, might be created after many aspects of the service have been built, and typically a series of smaller stories collectively represents its intent. The nice part about automated acceptance testing vs. unit testing is that you can test at the highest level of end-to-end functionality as well as very specific aspects of interaction.

Here’s the simple template often used to ensure a complete and testable story:

As a <type of user or persona>, I want <what you want to happen>, so that <underlying reasoning>.

Here’s a larger story related to resupply:

Keep Flavor in Stock:

AS AN ice cream vendor, I WANT to proactively resupply my customers’ favorite flavors SO THAT I can ensure I don’t run out or exceed my storage capacity.

Proper agile user stories must include acceptance criteria, and those criteria require automatic validation to keep pace with agile development. Without criteria a user story can be neither mutually understood nor automated; before criteria are added you could call them sketches or just backlog items. I recommend between 2 and 5 acceptance criteria to right-size the story. Too many and the story gets confusing to understand and test, and can take the lion’s share of the sprint to build and verify. Too few and it may be poorly defined, too specific, or incompletely specified.

Here’s the simple template often used to ensure a complete and testable criterion:

Given <context or current status> AND <more context> WHEN <triggering action> AND <further action> THEN <expected result>

Criteria 1:

Given remaining storage capacity of 20 barrels of ice cream

AND we’re averaging at least 5 customers/hour ordering Rocky Road

AND delivery time of 2 days,

When Rocky Road gets down to 2 barrels

Then the system orders 5 more barrels of Rocky Road.

Criteria 2:

Given remaining storage capacity of 10 barrels of ice cream

AND delivery time of 5 days,

AND we’re averaging at least 2 customers/hour ordering Rocky Road

When Rocky Road gets down to 1 barrel

Then the system orders 2 more barrels of Rocky Road.

Tables can be a convenient format for exercising the variety of values matching your criteria:

Remaining Storage (barrels) | Remaining Barrels | Delivery Time (days) | Customers/Hr | Barrels Ordered
30                          | 2                 | 2                    | 5            | 5
30                          | 5                 | 1                    | 2            | 2
10                          | 1                 | 7                    | 2            | 5
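
In Cucumber’s Gherkin, for instance, a table like this becomes the Examples block of a Scenario Outline, and each row runs as its own test. Here’s a sketch; the exact step wording is whatever your team settles on:

Scenario Outline: Reorder a flavor before it runs out
  Given remaining storage capacity of <storage> barrels of ice cream
  And we're averaging at least <rate> customers/hour ordering Rocky Road
  And delivery time of <delivery> days
  When Rocky Road gets down to <remaining> barrels
  Then the system orders <ordered> more barrels of Rocky Road

  Examples:
    | storage | remaining | delivery | rate | ordered |
    | 30      | 2         | 2        | 5    | 5       |
    | 30      | 5         | 1        | 2    | 2       |
    | 10      | 1         | 7        | 2    | 5       |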

What better way to provide story validation than to connect stories with a suite of acceptance tests that automatically validate their criteria, signaling their readiness or ‘doneness’? To be implementable, tests must be written with clear intent, a clear understanding of which specific part of the system they exercise, and clear pass/fail criteria. Test automation is more than a collection of test scripts tied to a test runner; it is an interoperating system of components.
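
To make that binding concrete, here is a minimal sketch of step definitions for the criteria above, using Python’s behave runner (Cucumber, FIT and friends play the same role in other stacks). The FakeInventorySystem and its one-line reorder rule are stand-ins invented for this sketch; they happen to satisfy Criteria 1 and 2, while a real implementation would also account for delivery time and demand (the third table row would rightly fail against this toy):

# features/steps/resupply_steps.py -- wires the Given/When/Then phrasing
# above to executable code. FakeInventorySystem is a hypothetical
# stand-in for the real system under test.
from behave import given, when, then

class FakeInventorySystem:
    """Toy model: reorder as many barrels as the hourly demand,
    capped by the storage space still free. Not a real policy."""
    def __init__(self):
        self.capacity = 0
        self.demand_per_hour = 0
        self.delivery_days = 0
        self.last_order = None

    def stock_drops_to(self, flavor, barrels):
        self.last_order = min(self.demand_per_hour, self.capacity - barrels)

@given("remaining storage capacity of {capacity:d} barrels of ice cream")
def set_capacity(context, capacity):
    context.system = FakeInventorySystem()
    context.system.capacity = capacity

@given("we're averaging at least {rate:d} customers/hour ordering {flavor}")
def set_demand(context, rate, flavor):
    context.system.demand_per_hour = rate

@given("delivery time of {days:d} days")
def set_delivery_time(context, days):
    context.system.delivery_days = days

@when("{flavor} gets down to {barrels:d} barrels")
def stock_drops(context, flavor, barrels):
    context.system.stock_drops_to(flavor, barrels)

@then("the system orders {barrels:d} more barrels of {flavor}")
def assert_order_placed(context, barrels, flavor):
    assert context.system.last_order == barrels, (
        f"expected an order of {barrels}, got {context.system.last_order}")

Run behave from the project root and it pairs each step in your .feature files with a decorated function by matching the text pattern.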

Eight Aspects of an Acceptance Test Automation Stack:

  1.  Expressions of Intent. A feature or story containing acceptance criteria can live in documents on the desktop or in the cloud, in wikis, or in code or document repositories. These are what the acceptance tests validate, and they need to connect to the tests in a way that can drive executable checks. A variety of tools help make this happen: Pivotal Tracker, Jira, Confluence, any wiki, repository-based text/HTML files, and Google Docs.
  2.  Guiding Templates for semi-formal language. These are helpful for structuring acceptance criteria and test descriptions in a way that connects to the code executing the intended tests. Cucumber has a bridge language cleverly called ‘Gherkin’ for this purpose; FIT and Fitnesse use ‘fixtures’; ThoughtWorks’ Twist has a GUI template to guide the author into the right language; the Jira Behave plugin does this too. Most popular is the Given <context>, When <action/event>, Then <expected result> format. The expected context and result can be data sets represented directly as tables in many tools. Concordion is alone in allowing free-form English by hiding the ‘instrumentation’ in HTML tags.
  3.  Organizing. Categories, tags, and relationships can help create order out of the multitude. Almost any ticketing app (Jira, Pivotal Tracker) or wiki (Confluence, MediaWiki) lets you tag items.
  4.  Choosing. Build suites for versioning and pacing. Decide which tests to run, when.
  5.  Managing Data. Seed test data and application state. Most tools have ‘setup’ routines embedded in each test; a sketch follows this list.
  6.  Driving the Tests. Executes the test logic and drives the software. There is a lot of diversity in this part of the stack; fundamentally the options are:
    • To code, e.g. xUnit, FIT, and Fitnesse (which comes with a wiki to present and edit test descriptions)
    • To data, e.g. JDBC connections or JSON calls to validate data
    • To network protocols, e.g. REST or SOAP calls
    • To simulated browsers
    • To an actual browser API, e.g. Selenium (the common GUI browser driver)
    • To a desktop GUI, e.g. HP QuickTest/WinRunner
    • To a mobile API, e.g. Calabash (a mobile driver)
  7.  Triggering or Scheduling Tests and Signaling Pass or Fail, e.g. build managers like Jenkins and IDEs like Eclipse.
  8.  Gathering and Reporting Results, to a text file, HTML, an API, or a data file.
    1. Note: It is important that the test tool helps you efficiently trace failures back to what’s wrong with the system, or else testing will become a bottleneck or, worse, the results will get ignored.
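
To ground aspects 5, 7 and 8 in the same behave stack sketched above: behave looks for hooks in features/environment.py, a natural home for seeding data and state, and its JUnit-style XML output is the kind of result a build manager like Jenkins consumes. The seeding helper below is hypothetical; the hook names are behave’s own.

# features/environment.py -- behave's hook file (aspect 5: managing data)

def seed_test_database(flavors, barrels_each):
    """Hypothetical stand-in: reset the application's datastore to a
    known state. Replace with real fixture loading for your system."""
    pass

def before_all(context):
    # Runs once before the whole suite: record shared configuration.
    context.base_url = "http://localhost:8000"  # assumed local test server

def before_scenario(context, scenario):
    # Runs before every scenario: re-seed so tests stay independent.
    seed_test_database(flavors=["Rocky Road"], barrels_each=20)

# Aspects 7 and 8: a CI job (Jenkins, for example) triggers the suite on
# each commit and gathers results as JUnit-style XML via:
#
#   behave --junit --junit-directory reports/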

Ten Things To Look (Out) For:

  1. Requirements ‘Traceability’, or more accurately, something to link the test to the intent of the product owner, usually represented by a story. Concordion, Twist, Behave and Cucumber are principal examples here.
  2. Task and issue trackers. They have a fundamentally different purpose than repeatedly running tests: if an issue is closed, its tests should still run (regression); conversely, running tests multiple times should not change the status of the user story unless you want it to.
  3. Business Clarity of Intent: tests should be readable by product people and customers. An approachable, flexible syntax or tabular structure for handling input and expected data sets is important here. It keeps business and domain experts from having to dive into code just to communicate their expectations, and spares programmers the inverse: slogging through thick specifications or mysterious datasets in order to import them into the test code.
  4. Usability for business product people. In other words, having to use a developer IDE to check in documents is not an option unless every product manager is willing to do this on a daily basis.
  5. Continuous Integration (CI): provides the tightest feedback loop possible on the completeness of your software, extracting the most utility out of your test suite.
  6. Suites: collecting and finding just the right combination of tests requires a good organizer.
  7. Versioning: often overlooked until too late. You want to track and run the tests that match the version of software they were built for, or else you will get false failures or false passes.
  8. Segmenting: separate quick tests from slow tests to get the fastest turnaround time. Work diligently to turn your slow tests into fast tests; you may need to change technology or architecture to do this, which is a good thing. A tagging sketch follows this list.
  9. Themes: create suites that focus on a particular problem area.
  10. Refactoring: this test automation stuff is real software and can get unwieldy fast. Practice safe coding with TDD and refactoring.
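
As a concrete example of suites, segmenting and themes together: Cucumber-style tools let you tag scenarios right in the feature file and select them at run time. With behave the selection looks like this (the tag names are arbitrary). In the feature file:

@slow @resupply
Scenario: Full end-to-end resupply through the supplier's real API
  # ...steps as above...

At the command line:

behave --tags=-slow      # fast feedback while developing: skip @slow
behave                   # nightly CI run: everything
behave --tags=@resupply  # theme suite: focus on one problem area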

Further Reading

Dale Emery’s excellent article on maintainable tests.

A nice list of JavaScript test tools and web testing tools.

For really practical, hands-on instruction you can’t beat Gojko Adzic’s Specification by Example, along with some great instructional video.

Lean Startup hypotheses for challenging your assumptions about the problems you think you may have: an interview with Josh Seiden and an article by Barry O’Reilly.


