I strongly agree with Cem Kaner's statement (in "Inefficiency and Ineffectiveness of Software Testing: A Key Problem in Software Engineering") that "sometimes tests uncover defects that don't fit within any coverage model because they are side effects of the tests rather than explicitly planned foci of the tests."

My experience indicates that an effective way to increase the likelihood that you will trigger such defects (without explicitly looking for them) is to try to maximize the variation between each test case you execute.

A case in point: when I sat down to dinner with James Bach a year or so ago at a testing conference in Boston, he gave me a quick testing challenge (as he is fond of doing with testers he meets for the first time, to see how we think). He asked how I would test a very simple calendar entry application that allowed users to record the start and end times of diary events. The key inputs to use as test conditions were the start times and end times.

I proposed a set of times designed to provide as much variety as possible from one test case to the next. For the start and end times, I used a small number of different times spread throughout the morning, afternoon, and evening. This strategy identified the defect the puzzle was designed to uncover within a small handful of tests. What was most memorable about the experience, from my perspective, was not that I "succeeded" in triggering the bug but that the tests I created triggered a type of bug that was, in Kaner's words, a "side effect of the tests rather than explicitly planned foci of the tests."

The business logic that should have flagged invalid combinations of start and end times was coded incorrectly: instead of comparing the times numerically, it compared them as text, ordering them alphabetically. I was not consciously looking for that kind of flaw in the business logic, but by maximizing the variation from test case to test case, I maximized my odds of finding it.
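To illustrate the class of bug (this is a hypothetical sketch, not the actual calendar application's code): when clock times are compared as strings, "9:00" sorts *after* "10:30" because the character "9" comes after "1", so a perfectly valid event gets flagged as invalid.

```python
start, end = "9:00", "10:30"

# Buggy validation: string comparison orders times alphabetically,
# so "9:00" >= "10:30" is True (character '9' > character '1').
buggy_invalid = start >= end  # wrongly flags a valid event

# Correct validation: convert each time to minutes since midnight
# before comparing, so ordering is chronological.
def to_minutes(t):
    h, m = t.split(":")
    return int(h) * 60 + int(m)

correct_invalid = to_minutes(start) >= to_minutes(end)  # False: event is valid
```

Note that a test suite using only afternoon times (e.g. "13:00" to "14:00") would never trip this bug; it takes a pairing like a single-digit hour before a double-digit hour, which varied inputs are far more likely to hit.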

Efficiently achieving structured variation is difficult because a human brain struggles to remember which of dozens of test conditions have already been tested together (or whether we're accidentally repeating ourselves). This is where pairwise and combinatorial test case generation tools like our Hexawise tool come in: they are designed to achieve as much variation from test case to test case as possible. One of the relatively unsung benefits of this approach is that it helps find bugs, like this one, that you aren't even consciously looking for.


Related:
- 3 Strategies to Maximize Effectiveness of Your Tests
- Getting Started with a Test Plan When Faced with a Combinatorial Explosion
- Book Review of "Explore It!" Elisabeth Hendrickson's Excellent New Book on Software Testing

By: Justin Hunter on Feb 26, 2013

Categories: Software Testing, Testing Strategies