What is the Coverage Matrix?

Some of the most challenging questions testing teams are asked include:

  • Are we testing enough?
  • Are we testing too much?
  • What is the level of testing coverage these tests achieve?
  • What if we get extremely pressed for time… What level of coverage could we achieve in half as many tests as we have planned?

Hexawise now allows you to visualize testing coverage more precisely than ever. The precise pairwise interactions covered by Hexawise-generated tests are displayed in the Coverage Matrix. This is different from our Coverage Graph, which allows users to see the additional coverage each test in their test set provides, but without as much granular detail.

 The Hexawise Coverage Matrix is a grid made up of all the pairs in the test plan being analyzed. With each value listed down the side and also across the top, you can find an intersection and see if that specific pair is covered at any point in your set of Hexawise-generated tests. Read on for the specific features of this reporting function.
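
Conceptually, each cell in that grid answers a single question: does this pair of values appear together in at least one generated test? Here is a minimal Python sketch of that bookkeeping; the test data is invented for illustration, and storing pairs with their parameters in sorted order is an assumption of the sketch, not Hexawise's internal format:

    from itertools import combinations

    def covered_pairs(tests):
        """Every pair of (parameter, value) assignments that appears
        together in at least one test."""
        pairs = set()
        for test in tests:  # each test maps parameter -> value
            pairs |= set(combinations(sorted(test.items()), 2))
        return pairs

    tests = [
        {"OS": "Windows", "Browser": "Chrome", "User": "Admin"},
        {"OS": "Mac", "Browser": "Safari", "User": "Regular"},
    ]
    print((("Browser", "Chrome"), ("OS", "Windows")) in covered_pairs(tests))  # True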

The Coverage Matrix in action

Before we dive into the specifics of the Coverage Matrix, let’s see it in action.

If we have a test plan and have created some tests, we need to go to the ‘Analyze Tests’ screen and click on Coverage Matrix. I recommend using a larger test plan (5+ parameters with a handful of values each) to truly see the power this visualization has.

 

 

‘OoooOOOooo’ snazzy new layout. That’s right, we’re taking it up a notch!

Once you click on the Coverage Matrix, the chart goes through an animation showing the coverage each test achieves.

Things to look for in the animation

  • Just beneath the slider over the chart, you can see what percentage of interactions is covered (just like the Coverage Graph)
  • Also underneath the slider, you can see the precise test that is being displayed
  • The Coverage Matrix starts fully red (0 tests equals 0% coverage)
  • As each test is added, coverage increases, turning each newly covered pair into a green square
  • The slider at the top can be used after the animation is complete

  

How to read the Coverage Matrix

Now let’s get into the specifics of this feature.

What the squares mean

The Coverage Matrix has 3 primary indicators for coverage:

  1. Red Squares: A pair not currently covered
  2. Green Squares: A pair that has been covered
  3. Black Squares: A pair that will never be covered

  

Red Squares: The Coverage Matrix starts off all red since, at 0 tests, 0 pairs are covered. The number of red squares decreases with each subsequent test as coverage increases. At any given test, the number of red squares equals the number of remaining pairwise gaps.

 

Green Squares: As each test is added, more pairs are covered. The squares convert from red to green. If you were to stop executing Hexawise-generated tests before the final test, you would leave gaps in your pairwise coverage. The green squares show you what you will have covered if you stop early.

 

Black Squares: If you add invalid or married pairs to your test plan, you are intentionally removing pairs from ever appearing in your test set. Usually that will be because such combinations are impossible or impractical to test together (such as O/S = Mac OSX with Browser = Internet Explorer). The pairs that are removed are shown as black squares in the Coverage Matrix.
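
The three colours amount to a simple classification of each cell. A minimal sketch, reusing the O/S and Browser example above (the pair encoding, with pairs stored in a fixed order, is an assumption of this sketch, not Hexawise's internal format):

    def cell_colour(pair, covered, invalid):
        """Classify one Coverage Matrix cell for a given pair."""
        if pair in invalid:
            return "black"  # constrained away: will never be covered
        if pair in covered:
            return "green"  # covered by at least one test so far
        return "red"        # a remaining pairwise gap

    invalid = {(("O/S", "Mac OSX"), ("Browser", "Internet Explorer"))}
    covered = {(("O/S", "Windows"), ("Browser", "Internet Explorer"))}
    pair = (("O/S", "Mac OSX"), ("Browser", "Internet Explorer"))
    print(cell_colour(pair, covered, invalid))  # black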

 

How the Coverage Matrix is laid out

Your parameter values are listed down the side and across the top. The intersection of the parameter values is the coverage status for a particular pair of values.

This is a Coverage Matrix

 

Your parameter values

To make for a clean display, the parameters are listed down the side from the first to the second-to-last, and across the top from the second to the last.

Example: If the parameters in the plan are ‘A,’ ‘B,’ ‘C,’ ‘D,’ and ‘E,’ you will have a Coverage Matrix that looks like this one:
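
The effect is an upper-triangular layout in which each pairing of parameters gets exactly one block of cells. A short sketch of which blocks exist for parameters A through E:

    # Rows show parameters A..D; columns show B..E. Only blocks above
    # the diagonal exist, so each pair of parameters appears exactly once.
    params = ["A", "B", "C", "D", "E"]
    for i, row in enumerate(params[:-1]):  # down the side: A..D
        for col in params[i + 1:]:         # across the top: B..E
            print(f"block of cells for {row}-value x {col}-value pairs")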

 

Using the Coverage Matrix

Since each Hexawise-generated test covers many pairs (or sometimes just a few), the test designer is often unaware of exactly when each pair is covered. The Coverage Matrix allows the tester to visualize and communicate when a precise pair is first covered in their test set.

 

Let’s take a detailed look at exactly how the Coverage Matrix accomplishes this.

 

For this example, we have a Mortgage Application system being tested. We achieve 81% pairwise coverage after just 8 tests. What kinds of pairs do we see being covered by our 8th test?

 

We also see there are sporadic red squares, since we are viewing only 81% coverage of all possible pairs. So we review this chart with our stakeholders, and they like it. But then they see, towards the bottom, that Region 2 is not tested with Investment Property. They tell us that situation needs to be covered, and ask how we’ll cover it.

 

By opening Hexawise, we can drag the slider to see when that red square turns green.

  

In this case, it’s the very next test: test 9. So we alert the stakeholders that we can cover that scenario by executing 9 tests, which bumps our overall coverage to 86% pairwise coverage.
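
What the slider animates is, in effect, a lookup of the first test that covers a given pair. A minimal sketch, using the same test representation as the earlier snippet (the parameter and value names in the comment are illustrative):

    def first_test_covering(tests, pair):
        """1-based index of the first test that covers `pair`, or None
        if the pair never appears (a red or black square)."""
        (p1, v1), (p2, v2) = pair
        for i, test in enumerate(tests, start=1):
            if test.get(p1) == v1 and test.get(p2) == v2:
                return i
        return None

    # Illustrative call -- parameter and value names are made up:
    # first_test_covering(tests, (("Region", "Region 2"),
    #                             ("Property Type", "Investment Property")))  # -> 9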

 

In other similar situations, a given “high-priority” combination might not appear until significantly later in a plan. If a stakeholder-requested combination did not appear until near the end in this example, we might be best off executing the first 8 Hexawise tests and then addressing that stakeholder-requested combination in a one-off test outside of Hexawise. (There are “fancier” options for achieving that goal, such as using Hexawise’s “freeze” function, but that’s a different topic for a different blog post. This one is lengthy as it is.)

 

Related Considerations

Anyone still reading at this point deserves a couple bits of extra insight:

First, it is important to keep in mind that Hexawise orders your tests in an optimal way, so that if you stop testing after any given test, you will have covered as much as possible within the amount of time you have had to test. You can see this by comparing the (much larger) number of squares that turn green in your first test to the (much smaller) number of squares that turn green in your final test.
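
You can quantify that front-loading by counting how many previously unseen pairs each successive test contributes. A minimal sketch, using the same test representation as the earlier snippets:

    from itertools import combinations

    def new_pairs_per_test(tests):
        """How many previously unseen pairs each successive test adds."""
        seen, added = set(), []
        for test in tests:
            pairs = set(combinations(sorted(test.items()), 2))
            added.append(len(pairs - seen))
            seen |= pairs
        return added  # typically large at first, shrinking toward zero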

Second, if you are discussing the Coverage Matrix with a stakeholder who is new to Hexawise’s optimized coverage approach, they might need a brief explanation of why pairwise testing coverage is such an effective starting point for test prioritization. Introductory articles such as “Why do most software tests suck?” and “Does pairwise testing really work?” might be helpful. It is equally important to remember that high-priority scenarios involving more than 2 values should also be included in your test sets. If a stakeholder asks for such an involved scenario, you can easily force it to be included using Hexawise’s Requirements feature. Those seeded high-priority scenarios will appear in the very first tests in your Hexawise test set; you do not need the new Coverage Matrix to confirm that they are being included in your test set.

A Few Final Thoughts

This is one of our favorite features we have ever released. We’re excited to make it available and truly hope you find it useful. We are constantly seeking new ways to make it easier for software testers to create unusually powerful, thorough, efficient, and effective test sets, and new ways to help testers and stakeholders communicate clearly about the testing coverage their tests are achieving. We hope the new Coverage Matrix feature will make it easier for you to communicate the superior testing coverage you’re achieving when you discuss your Hexawise-generated tests with stakeholders.

By: Jordan Weck on Jan 29, 2015

Michael Bolton is one of the software testing industry's deep thinkers. He has an impressive ability to logically analyze testing problems and clearly explain complex issues. 

I like how Michael summarizes what people often really mean when they say "it works"*

 

There's a lot of truth in those words, isn't there? I've shared these words from Michael in test design trainings I've done recently and found that they immediately resonate with quite a few testers. It seems that anyone who has been in testing for any length of time has seen teams of testers test a feature or function a bit, declare that "it works," only to discover later that the feature/function works in some situations but not in others.

What's a tester to do? We recommend testers use two deliberate strategies: use a rich oracle and cover critical interactions.

First, use a "rich oracle" to enage your brain more actively and train your eye to better recognize potential issues. Imagine the following scenario. 3 people are in a room. The first person, a guy plucked from the street outside at random, is given a set of 10 written test scripts to execute and told to follow the test scripts, step-by-step in return for a six pack of beer. Being a fan of beer, and endowed with the dual-abilities of being able to both read instructions and follow instructions, he performs what is asked of him dutifully.

In the room are two testers who are allowed to observe the tests being executed but who are not allowed to communicate.

  • The first tester has a list of ten numbers, each with three boxes for checkmarks. He operates in a world of black and white: if the documented "Expected Result" is consistent with what he observes, he writes a green check mark. If the "Expected Result" is inconsistent, he writes a red "X."
  • The second tester operates differently. She goes beyond. She goes deeper. She notices subtle things along the way that look unexpected, or not quite right. She makes notes of those things. In doing so, she thinks of new test ideas that have not been executed yet. She documents those test ideas to explore further later, provided there is time.

I think you see where I'm going with this. My point is that the more curious approach adopted by the second tester is a far more valuable one to people who care about software quality. Why is this? Here too, Michael Bolton has words of wisdom to share that resonate well:

 

Second, testers should adopt test case design approaches in order to avoid the "under some conditions... once" risk. One of the most important benefits of using our Hexawise test case design tool is that, even with very basic pairwise test sets, every feature or function you test will automatically be tested multiple times, under as many different conditions as possible in the time you have available.

After close to 10 years of introducing new groups of software testers to these types of test design approaches, we find people have a hard time believing how big the efficiency and thoroughness improvements in this area are. That's why we always strongly encourage teams using our optimized test case selection approach to do apples-to-apples comparisons of coverage and defect-finding effectiveness. We work with teams to compare the coverage gaps of their existing "business as usual" test sets against the thoroughness achieved when Hexawise is used to generate an optimized set of tests. Images from one recent coverage analysis are shown below. Data on defect-finding effectiveness and defect-finding efficiency improvements resulting from optimized test case selection can also be found here.

 

 

 

*With thanks to Jon Bach for sharing this on his blog.

By: Justin Hunter on Sep 25, 2014

We have mentioned George Box before. He was an amazing person, scientist and statistician. One of the traditions George started in Madison, Wisconsin was the Monday Night Beer Sessions.

An excerpt of Mac Berthouex’s introduction to An Accidental Statistician: The Life and Memories of George E. P. Box:

I met George Box in 1968 at the long-running hit show that he called “The Monday Night Beer Session,” an informal discussion group that met in the basement of his house. I was taking Bill Hunter’s course in nonlinear model building. Bill suggested that I should go and talk about some research we were doing. The idea of discussing a modeling problem with the renowned Professor Box was unsettling. Bill said it would be good because George liked engineers.

Bill and several of the Monday Nighters were chemical engineers, and George’s early partnership with Olaf Hougen, then Chair of Chemical Engineering at Wisconsin, was a creative force in the early days of the newly formed Statistics Department. I tightened my belt and dropped in one night, sitting in the back and wondering whether I dared take a beer (Fauerbach brand, an appropriate choice for doing statistics because no two cases were alike).

I attended a great many sessions over almost 30 years, during which hundreds of Monday Nighters got to watch George execute an exquisite interplay of questions, quick tutorials, practical suggestions, and encouragement for anyone who had a problem and wanted to use statistics. No problem was too small, and no problem was too difficult. The output from George was always helpful and friendly advice, never discouragement. Week after week we observed the cycle of discovery and iterative experimentation.

At a meetup with a few testers in Nottingham, Justin, out of a desire to do something nice for some testers in the community, found himself buying beers for the group. The organizer asked Justin to put a couple of slides together, then posted them on Twitter and thanked us.

[Image: graphic brochure for Hexawise Buys the Beers]


Justin made a similar offer to attendees at StarEast. And as luck would have it, the first guy to respond was Alan Page, who was giving the keynote speech at the conference. Alan sent out tweets showing testers getting and sharing a few beers.

[Image: screen grab of tweets from StarEast of testers enjoying Hexawise Buys the Beers]

And CAST2014 didn't miss a beat. 

Now #HexawiseBuysTheBeers has become a way to encourage camaraderie among software testers at conferences and another small way the legacy of George Box lives on.

By: John Hunter and Justin Hunter on Sep 1, 2014

We have written before about the general question of "Should I Use One Test Plan or Multiple Plans?" This post addresses the same question with a focus on plans that have a relatively large percentage of constraints (e.g., a relatively large number of Invalid Pairs and/or Married Pairs).
 
Hexawise creates a Plan Scorecard that analyzes every set of tests Hexawise users generate. The Plan Scorecard exists to help you identify potential problems with plans you create and to make you aware of possible ways to improve your sets of tests. One of the notifications the Plan Scorecard provides goes like this:
 
Consideration: "53% of the parameter values are directly or indirectly constrained."   
Explanation: Test plans with more than half of the parameter values constrained are often trying to do too much. They may be better broken into more than one test plan.
 
Constraints are used to prevent "impossible to test for" scenarios from being generated by the Hexawise test case generator. For more information about entering constraints into Hexawise, see these explanations of Invalid Pairs and Married Pairs. If you do not use Invalid Pairs or Married Pairs in a plan, 0% of your parameter values would be constrained.
 
What should you do if you get a notification that your plan is highly constrained? Simple: consider your options. Specifically, consider the pros and cons of splitting the plan you're working on into multiple separate plans.
 
 
Why can heavily-constrained plans be a problem?
  • A number above 50% or so indicates that it might make sense to consider breaking your plan into 2 or more plans. 
  • Why? Because as more and more constraints accumulate in a plan, keeping track of them all, making sure they're all accurate, and making sure the constraints in one part of your plan do not conflict with constraints in other parts in unintended ways can start to take a lot of mental energy. 
  • Furthermore, if your constraints do begin to conflict with one another, that can make it impossible for the Hexawise algorithm to identify valid values to populate in some parts of some of your tests. When that happens, instead of an actual value appearing in a test case, you will see the words "No Possible Value" appear. (The sketch below shows how such a dead end can arise.)
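
Here is a minimal Python sketch of how conflicting Invalid Pairs can empty out a slot; the pizza parameters and constraints are invented for illustration and are not Hexawise's internal representation:

    def possible_values(partial_test, param, values, invalid):
        """Values of `param` not excluded by an Invalid Pair with any
        choice already made in `partial_test`."""
        return [v for v in values
                if not any(frozenset([(p, pv), (param, v)]) in invalid
                           for p, pv in partial_test.items())]

    # Hypothetical constraints that jointly rule out every pizza size:
    invalid = {frozenset([("Crust", "Gluten-free"), ("Size", s)])
               for s in ("Small", "Medium", "Large")}
    print(possible_values({"Crust": "Gluten-free"}, "Size",
                          ["Small", "Medium", "Large"], invalid))  # [] -> "No Possible Value"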
 
Why are multiple simpler plans often preferable to one more complex one?
  • It is often much easier and quicker (from a modeling standpoint) to create two different plans. Creating two separate plans instead of one single plan often makes it possible to eliminate the need for the majority of the constraints in your plan.
    • For example, if you had a pizza ordering application with a lot of constraints around the value "meat pizza" and a lot of constraints around the value "vegetarian pizza," it could be attractive to create one plan (e.g., one set of tests) for meat pizzas and a different set of tests for vegetarian pizzas.
  • Simpler plans with fewer constraints tend to be easier to understand, modify, and maintain.
 
 
What are practical considerations when splitting a single plan into multiple ones?
  • To determine where / how to split a plan, begin by asking "what values have the most constraints associated with them?"
    • In the example above, "meat pizza" and "veggie pizza" had the most constraints; creating one plan for meat pizzas and a separate plan for veggie pizzas was the way to go. It would not have made sense to split the plan into one plan with scenarios paid for in cash and a different plan with scenarios paid for with credit cards if payment types did not have many Invalid Pairs or Married Pairs associated with them.
    • We were recently talking to a client whose plan had 58% of its parameter values constrained. We helped them look at where most of the constraints were coming from. It turned out that "Timing of Loan Payment" was the main culprit. As a result, we suggested they consider three separate plans: one for Delinquent Payment Scenarios, one for Regular/Timely Loan Payment Scenarios, and one for Loan Pre-Payment Scenarios.
    • While working with another client that was dealing with a highly constrained plan, "Type of User" was the source of most of the constraints. Super-Users were allowed to perform all kinds of activities on the System Under Test; "Regular Users" were able to perform a far more limited number of actions. It made sense in that case to break the original combined plan into two separate plans: one for Admin (Super-User) Scenarios and one for Regular User Scenarios.
  • After determining where to split a plan, the next steps tend to be relatively straightforward:
    • If you're starting with one combined plan and want to break it into two plans, we would recommend these steps:
    • Start by creating 3 copies of the same plan:
      • Make a copy of the original combined plan so you can easily go back to an earlier known version if things start to go horribly wrong (or if you realize that the multiple plan strategy results in the creation of significantly more tests than the original single plan version)
      • Make a copy that you will modify for, e.g., "Regular User Scenarios"
      • Make a copy that you will modify for, e.g., "Admin User Scenarios"
    • Take advantage of Hexawise's Bulk Edit feature and tailor each plan as needed. 
      • Delete any unnecessary Parameters, Invalid Pairs, Married Pairs, Requirements, and Expected results
      • Add high-priority scenarios as necessary
 
What disadvantage might there be to multiple plans?
  • A potential disadvantage to a multiple plan approach is that it sometimes results in more tests generated than a single test plan approach would.

By: Justin Hunter on Aug 14, 2014

Hexawise helps teams achieve the following qualitative benefits:

  

Not all of the benefits above can easily be quantified. So what should you do if you are tasked with creating a business case to support adoption of Hexawise? Simple:

  • Measure what happens when you create ~50 tests with your business-as-usual process; compare that to how long it takes to generate a set of ~25 tests with Hexawise.
  • Measure how long it takes to execute each set of tests.
  • Measure how many unique defects you find executing each set of tests.
  • For typical findings, see these empirical benefit measurement studies.
  • Then put together a business case summary like the one below. (A hypothetical worked example of the arithmetic follows this list.)  
    • The summary focuses primarily on the objectively measurable efficiency savings you measured.
    • The summary also breaks out valuable qualitative benefits into separate line items (to prevent those valuable - but difficult to quantify - benefits from being forgotten about).
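
To make the comparison concrete, here is the shape of that arithmetic in Python, with entirely hypothetical numbers standing in for your own measurements:

    # Entirely hypothetical numbers -- substitute your own measurements.
    bau_tests, bau_hours_each = 50, 1.0      # business-as-usual test set
    hexawise_tests, hw_hours_each = 25, 1.0  # Hexawise-generated test set
    bau_defects, hexawise_defects = 10, 12   # unique defects found by each

    hours_saved = bau_tests * bau_hours_each - hexawise_tests * hw_hours_each
    print(f"Execution hours saved per cycle: {hours_saved}")  # 25.0
    print(f"Defects per execution hour: "
          f"{bau_defects / (bau_tests * bau_hours_each):.2f} vs "
          f"{hexawise_defects / (hexawise_tests * hw_hours_each):.2f}")  # 0.20 vs 0.48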

 

By: Justin Hunter on Aug 11, 2014

I run into the same problem quite often: people have a hard time distinguishing between good and bad tests.

So what are ‘bad tests’?

Bad tests are those that don’t make the tester learn as much as possible with each test step. Said another way: bad tests are repetitive.

Repetitive tests are time wasters. They make testing a mundane task to be completed. Repetitive tests don’t emphasize the complexity inherent in most systems today. And they let things (read: bugs, defects, faults) slip through into production.

Experiment Time!

Question: How bad are bad tests?

Research: (not much out there)

Hypothesis: Bad tests are deceptively bad.

Experiment:

  1. Choose some tests.
  2. Model their ideas in Hexawise.
  3. Lock-in bad tests as Requirements.
  4. Find out how many of the interactions bad tests cover.
  5. Then see how many tests are needed to cover all pairwise interactions.

Step 1: Choosing Tests

I took a set of tests that the testers all agreed covered the functionality that needed to be covered for the story they were testing.

Step 2: Test Designing

I sat down and analyzed them. They were as repetitive as any I’d seen. I sifted through them to pull out the main testing ideas. These I would use as parameters and values. I entered them into Hexawise. 

Step 3: Lock-in Bad Tests

I used our Requirements feature to lock in their repetitive tests, so Hexawise would be forced to use their tests first. (This would allow me to model how many interactions each of their tests covered.)

Step 4: Analyzing Bad Tests

This is the chart Hexawise produced. In 30 tests, they covered 47% of the total possible pairwise interactions.

Before we go on: Do you notice that there are little plateaus? From Test 5 to 6, 7 to 8, 14 to 15, 20 to 21, 22 to 23, 26 to 27, and 29 to 30?  That means those were tests that did not include ANY new pairs. No new information could have been learned from those tests. Learning literally plateaued. 7 times. Nearly 1 out of 4 tests didn't teach the testers anything. 
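
Spotting such plateaus is simply a matter of finding tests whose cumulative pair count does not move. A minimal sketch, with invented cumulative counts (not the actual data from this chart):

    def plateaus(cumulative):
        """1-based indices of tests that covered no new pairs."""
        return [i + 1 for i in range(1, len(cumulative))
                if cumulative[i] == cumulative[i - 1]]

    # Invented cumulative covered-pair counts after each of 8 tests:
    print(plateaus([40, 70, 90, 105, 112, 112, 120, 120]))  # [6, 8]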

Then I unleashed the Kraken Hexawise.

Step 5: Let Hexawise Optimize

I removed those Requirements to see how many tests Hexawise needed to cover all of the interactions in this specific functionality.

 

Okay, to be honest, I wanted Hexawise to do it in like 20 tests. (More Coverage. Fewer Tests.) But it used 30 (More Coverage). BUT (and this is a big BUT *snickers*) Hexawise covered 100% of the pairwise interactions in 30 tests. 

Lessons Learned

No one would have guessed their tests were that bad by just reading through them. They looked perfectly fine. They read like good test cases. But as we started to visualize their coverage, we saw that perhaps they weren't achieving all they could. And when we compared bad tests to Hexawise tests, more coverage (in the same number of tests) is a clear winner. 

In short:

  • Bad tests are deceptively bad
  • Sometimes you have to prove it
  • Pairwise tests can alleviate bad-test-itis

By: Jordan Weck on Jul 2, 2014

Hi, my name is Jordan Weck and I’m a combinatorial tester. And more importantly, probably, I’m the Vice President of Customer Success here at Hexawise.

(photo credit to the built-in camera on my MBP)

I’ve wanted to write a post for a while now, but always found reasons not to. Skipping over those. It’s time to throw my hat into the ring. Thanks to my colleagues, who put together a nice little Q&A to help you get to know me.

How long have you known about combinatorial testing?

My history with combinatorial testing goes all the way back to 2009. When I learned about pairwise testing, it was mind-blowing to me that it wasn’t more broadly adopted. Combinatorics aside, the approach of designing experiments so as to glean as much information as possible has been around for quite some time. So long, in fact, that textbooks have been written about it.

Why doesn’t everyone know or use combinatorial testing?

Well, there are a lot of approaches to software testing. I understand everyone has their own approach. Some think with models. Some look only at requirements. Some are just smokers to keep their sanity. Some will look to just accept some compatibility. Some are debating what color to paint their box. Some others are pure chaos (you know who you are). With so many options, people tend to get depth of experience in one approach and eschew others.

Do you think combinatorial testing is gaining traction?

Over the years I’ve seen combinatorial test design, or pairwise testing, increase in popularity. Like light bulbs slowly coming on across a town once shrouded in darkness. Are we burning brightly yet? I think there is a long way to go.

What are some of the roadblocks to adopting Hexawise?

I wouldn’t say we have roadblocks to adopting Hexawise. Rarely is the roadblock the tool itself. We live in a time of fast-paced learning. The testers today can pick up a tool just like a new app on their phone. But what makes them come back is seeing the relevance of the solution to their problems. That’s where the roadblock lies: understanding you have a problem with your current way of doing things. Most testers don’t know that their tests could be better. A lot don’t care either way. They want their work done faster and more efficiently. So do their bosses. And that is what we provide them.

What is customer success for Hexawise?

While I don’t want to get stuck on any one definition, it sounds something like: Get value out of using Hexawise, increase the amount of value per user, and get more people to use Hexawise (to begin that process anew).

How do you ensure customer success for Hexawise?

Educate users. Coach them on best practices. Clearly explain the concepts. The broader the usage, the more value they get, and the more successful the customer becomes.

Those who get combinatorial test design know they are getting a lot of value out of using fewer tests or more powerful tests. We at Hexawise focus our success efforts on those who don’t understand the pairwise approach or those who use it only every once in a while. We couple this approach with providing the tools to the users who do get it to spread the idea of combinatorial testing more broadly. 

And that’s a wrap…

Thanks for reading this intro blog post to my role as customer success liaison at Hexawise. I look forward to working with you all!

By: Jordan Weck on Jun 26, 2014

Begin with the “Goldilocks Rule” in mind to identify how much detail is appropriate.

[Image: graphical representation of the Goldilocks Rule]

If your tests cover a large scope, as in a set of end-to-end tests of a process, you focus first on entering only the most important elements of that process into Hexawise. If your tests cover a small scope, as in tests that focus on a few items on a single screen, the amount of detail you will want to include in Hexawise is higher. Learn more about the Goldilocks Rule.

Imagine explaining your System Under Test to someone’s mother. Start with 5 things that may change in each test

Suggestion: do not start this process with detailed Requirements or Technical Specifications. Instead, start with your basic common sense description of some things that would change from test to test.

  • If you were explaining the application to someone’s mother, how would you explain what it does in 2 minutes?
  • What kinds of things would be important to vary from test to test?
    • Hardware and software configurations?
    • User types?
    • Different actions that a user might take?
  • Identify 5 things that change from test to test and turn those 5 things into your first Parameters.
    • How might those things change? Add one or more Values for each Parameter.
    • At this point, general descriptions might be fine (e.g., SUVs or Economy cars rather than a specific Toyota Corolla).
    • Remember that, where possible, you should avoid creating long lists of values. (A minimal example model follows this list.)
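
For instance, a first-pass model of this kind might look like the following; every parameter and value here is invented for illustration:

    # A first-pass model: five things that vary from test to test,
    # each described coarsely (SUV vs. Economy, not specific models).
    model = {
        "Vehicle type": ["SUV", "Economy", "Luxury"],
        "User type": ["New customer", "Returning customer"],
        "Browser": ["Chrome", "Firefox", "Internet Explorer"],
        "Payment": ["Credit card", "Cash", "Voucher"],
        "Action": ["Book", "Modify booking", "Cancel booking"],
    }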

 

Create a draft set of tests, assess obvious big gaps in the tests, and start filling them.

What obvious types of scenarios are missing? Add parameters and values as necessary to fill those big, obvious holes.

Create tests again, assess whether you’re covering necessary business rules and requirements.

If your tests are not yet testing a business rule or Requirement that you want to test:

  • Consider adding a Parameter or Value
  • Consider adding a specific combination of values to be tested together in the requirements tab

Reduce scope if the test plan is too complex

  • Consider cutting the scope of the plan (e.g., create two different plans with largely similar parameters – one for regular users and one for special users – instead of one big plan which tries to “do it all.”)
  • Consider changing the way you are describing “hard coded” values. Instead of “iPhone 4S with International roaming” (which might not be a valid option after the first part of the draft test case suggests a transaction for a phone for corporate customers from a Northern location responding to the special holiday offer…), consider using descriptive parameters and values along the lines of “from the phones available at this point in the test case, select an option that meets as many of these conditions as you can…”

Consider adding additional details into your plan from other sources.

Other sources for test ideas could be:

By: John Hunter on Jun 17, 2014

The Hexawise Software Testing blog carnival focuses on sharing interesting and useful blog posts related to software testing.

 

  • The Zen of Application Test Suites by Curtis “Ovid” Poe – “This document is about testing applications — it’s not about how to write tests. Application test suites require a different, more disciplined approach than library test suites. I describe common misfeatures experienced in large application test suites and follow with recommendations on best practices.”

 

  • The Software Tester’s Easter Egg Hunt by Ben Austin – “The testing industry will benefit greatly if more people follow Holland’s example and prioritize critical thinking over the ability to write test scripts. That said, it is equally important to recognize that scripting tools, when used in the situations they’re intended for, can be great time-savers that can enable more thoughtful, context-driven exploratory testing.”

 

 

  • Testing at Airbnb by Lou Kosak – “Building good habits around testing is hard, especially at an established company. One of the biggest lessons I’ve learned this year is that as long as you have a team that’s open to the idea, it’s always possible to start testing (even if you’ve got a six year old monolithic Rails app to contend with). Once you have a decent test suite in place, you can actually start refactoring your legacy code, and from there anything is possible. The trick is just to get started.”

 

  • Disruptive Testing – James Bach Interviewed by Rodney Urquhart – “the future I am helping to build is about systematically training up skilled testers, some of whom but not all with coding skills, so that a small number of testers can do– or coordinate to be done– all the testing that a large project might need. A good future for testing would be one with a lot fewer “testers” but each one of those testers being passionate about his craft.”

 

  • The role of a Test Manager in agile by Katrina Clokie – “In a cross-skilled team, the agile tester must ensure that the quality of testing is good regardless of who in the team is completing it. The tester becomes the spokesperson for collaborative testing practices, and provides coaching via peer reviews or workshops to those without a testing background.”

 

  • Speaking to Your Business Using Measurements by Justin Rohrman – “In my experience, no one measure did a great job of telling the story about my software ecosystem. I’ve been deceived by groups of measures, too, because I misunderstood their weaknesses. If we are so easily deceived by measurements, imagine what happens when we send them off to others who need quick, high-level information.”
[Photo inside a temple in Ranakpur, India, by Justin Hunter.]

  • Shine a light by Rob Lambert – “Tours, personas and a variety of other test ideas give you a way of re-shining your light. Use these ideas to see the product in different ways, but never forget that it’s often time that is against you. And time is one of the hardest commodities to argue for during your testing phase.”

 

  • Using mind-mapping software as a visual test management tool by Aaron Hodder – “I like to employ a style of software testing that emphasises the personal freedom and responsibility of the individual tester to continually optimise the quality of his/her work by treating test-related learning, test design, test execution and test result interpretation as mutually supportive activities that run in parallel throughout the project. When performed by a skilled tester, this approach yields valuable and consistent results.”

 

  • Helpful Tips for Hiring Better Testers by Isaac Howard – “I looked at the good testers around me and tried to identify the “whys” of their success. All of them were driven to learn and capable of adapting to change. If they didn’t know a tool or a tech, they learned it. Because under the hood, testing is learning and relearning software everyday. The following are seven changes I made to my interviewing process.”

By: John Hunter on Jun 10, 2014

Here at Hexawise, we aim to make the process of testing easier and more efficient. One way in which we've done this is by promoting our test design tool. But we were seeing people make test plans too complicated. So we came up with an easy way to create super powerful test plans that stay simple and effective.

What did we come up with, you ask? 

'Start with a verb and a noun.'

The idea of using a verb and a noun to describe the appropriate scope for a set of tests has been used by Eduardo Miranda.  As he points out, if you find yourself tempted to add testing ideas (e.g., explore the help files in depth) that do not easily fit into your chosen verb and noun (e.g., "book a flight"), that can be a useful red flag; accordingly, you might want to exclude those new test ideas that "don't fit" from the test scenarios in your current scope of tests.

This strategy is useful for two main reasons.

One - It is a great jumping-off point from which to create interesting scenarios.

The tester is forced to understand and question their system under test. For some, this is a radically different idea of what their job is. We typically hear something like, “You mean, I can’t just ‘validate the file exists’?”

Two - The ‘verb and noun’ strategy requires you to remain specific to one common goal.

Test plans get bloated when you start incorporating disparate ideas. This is commonly seen when testing a system that would be described as ‘Apply for Loan’ and you start adding in ideas to ‘explore help files.’ While exploring the help files will be necessary at some point, it probably will not produce the results needed to successfully test your loan application process.

Now, let's explore this first reason:

You start by choosing any verb and noun.

 

Then you have to create questions to understand that verb and noun. 

Then answer your questions. This is important. If you can't answer them, how could you possibly test the system?
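
As an illustration, take the ‘book a flight’ example mentioned earlier; the questions and answers below are invented, but they show how each question becomes a parameter and its answers become that parameter’s values:

    # Verb + noun: "Book a flight". Each question about what could vary
    # becomes a parameter; its answers become that parameter's values.
    book_a_flight = {
        "Where does the flight go?": ["Domestic", "International"],
        "Who is travelling?": ["1 adult", "Family with children", "Group"],
        "When is it booked?": ["Same day", "Weeks in advance"],
        "How is it paid for?": ["Credit card", "Voucher", "Loyalty points"],
    }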

These ideas of questions and answers lend themselves quite well to being used as test steps or for scenario planning. Below you can see how well they imported into Hexawise.

Generating tests is the only thing left to do before testing.

Hopefully you've enjoyed this exploration on making simple and effective tests using the 'Verb and Noun' process.

By: Justin Hunter on Jun 4, 2014