Here at Hexawise, we aim to make the process of testing easier and more efficient. One way we've done this is by providing our test design tool. But we were seeing people make test plans that were too complicated. So we came up with an easy way to create super powerful test plans that stay simple and effective.

What did we come up with, you ask?

'Start with a verb and a noun.'

The idea of using a verb and a noun to describe the appropriate scope for a set of tests has been used by Eduardo Miranda. As he points out, if you find yourself tempted to add testing ideas (e.g., explore the help files in depth) that do not easily fit into your chosen verb and noun (e.g., "book a flight"), that can be a useful red flag; accordingly, you might want to exclude those new test ideas that "don't fit" from the test scenarios in your current scope of tests.

This strategy is useful for two main reasons.

One - It is a great jumping-off point from which to create interesting scenarios.

The tester is forced to understand and question their system under test. For some, this is a radically different idea of what their job is. We typically hear something like, "You mean, I can't just 'validate the file exists'?"

Two - The 'verb and noun' strategy requires that you stay focused on one common goal.

Test plans get bloated when you start incorporating disparate ideas. This is commonly seen when testing a system that would be described as 'Apply for Loan' and you start adding in ideas to 'explore help files.' While exploring the help files will be necessary at some point, it probably will not produce the results needed to successfully test your loan application process.

Now, let's explore this first reason:

You start by choosing any verb and noun.

Verb and Noun 1

Then you have to create questions to understand that verb and noun.

Newspaper questions

Then answer your questions. This is important. If you can't answer them, how could you possibly test the system?

Answer the questions

These ideas of questions and answers lend themselves quite well to being used as test steps or for scenario planning. Below you can see how well they imported into Hexawise.

Add those into Hexawise

Generating tests is the only thing left to do before testing.

Click on Create Tests

Hopefully you've enjoyed this exploration of making simple and effective tests using the 'Verb and Noun' process.

By: Justin Hunter on Jun 4, 2014

Categories: Combinatorial Software Testing

In general, all scripts (test cases) should have the same steps when using the Hexawise Auto-Scripts feature.

An important consideration in a Hexawise test plan is, "Can I test all these test cases with the same number of steps?" If the answer is no, then you should probably reconsider the scope of your test plan, as it's likely you're trying to include too much testing scope in a single plan.

Another rule of thumb to determine what should be included in the scope of one set of Hexawise tests is to think about a verb and a noun. "Apply for" could be your verb. "A loan" might be your noun. If you have a lot of permutations in which a person could apply for a loan, those scenarios could all fall within the scope of that set of tests. If, however, you started to think about testing the contents of help files, it may well be useful to include those "help file-related" tests in a different set of tests.

Each test case in a single Hexawise test plan will often test the same functionality, but with different permutations. For example, those permutations might include variations of environment (IE or Firefox, using mouse or keyboard, etc.), user (new user, normal user, VIP customer, admin), data (Florida, New York, under 18, over 18) and actions (used the dropdown, keyed it in manually, clicked the confirmation checkbox). The parenthetical examples I provided here come from testing end user software, but the same applies to other types of systems.
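
To make that concrete, here's a rough sketch in plain Ruby of the kind of variations a single plan might model. The parameter names and values below are just the illustrations from this paragraph, not Hexawise syntax:

# Hypothetical parameters for a single "apply for a loan" test plan.
parameters = {
  "Environment"  => ["IE", "Firefox"],
  "Input method" => ["mouse", "keyboard"],
  "User"         => ["new user", "normal user", "VIP customer", "admin"],
  "Data"         => ["Florida", "New York"],
  "Age"          => ["under 18", "over 18"],
  "Action"       => ["used the dropdown", "keyed it in manually", "clicked the confirmation checkbox"],
}

# Every generated test case walks the same steps; only the combination
# of values changes from test to test.
total = parameters.values.map(&:length).reduce(:*)
puts "#{total} possible combinations"  # => 192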

If you're testing the same scope (same verb & noun), but identifying all the possible variations, your Auto-Script steps will be the same for all test cases. What might change, of course, is what the tester should expect to happen. Should they see an error dialog or a confirmation dialog? Should the border be green or blue? Should the user get an "X" email or a "Y" email? But we'll save that for another discussion.

By: Sean Johnson on May 7, 2014

Categories: Combinatorial Testing, Hexawise tips

not-possible

What Does it Mean?

One of the more common support inquiries we receive is about Hexawise-generated test cases that include "No possible value" for a parameter. The first time you see this, it can be a bit unclear what it means and what you can do to address it.

A "no possible value" in a test case is telling you the test case is providing coverage for a needed pair in some other parameters, and in light of that needed pair your invalid and married pairs are then leaving then no value allowed for the parameter with "no possible value". That sounds confusing, but an example is much easier to understand.

An Example

Let's say we have a test plan with just 3 parameters, each with 2 values:

Fruit: Apple, Pear
Car: Toyota, Dodge
Dog: Collie, Mutt

And let's further suppose we have 2 invalid pairs:

if Fruit = Apple then Car cannot = Toyota
if Car = Dodge then Dog cannot = Mutt

This all seems simple, but a hidden problem lurks in this simple setup. To create 2-way coverage, Hexawise will ensure you've paired every parameter value with every other parameter value (unless a constraint says it shouldn't be paired), which in this case means that Hexawise will necessarily pair Fruit as Apple with Dog as Mutt in at least one test case, since that pairing could be the source of a bug. You probably already see the problem!

In the test case that has Fruit as Apple and Dog as Mutt we need to have a value for the Car parameter. You can't have Car as Toyota, because Apple can't be paired with Toyota, and you can't have Car as Dodge, because Mutt can't be paired with Dodge. So what value can Hexawise provide for Car in this test case? It has no value to provide, so it provides "no possible value".
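
To see concretely why nothing is left for Car, here's a minimal sketch in plain Ruby. This is just an illustration of the constraint logic, not how Hexawise works internally:

# In the test case where Fruit = Apple and Dog = Mutt, filter Car's
# values against the two invalid pairs.
car_values = ["Toyota", "Dodge"]

allowed = car_values.reject do |car|
  excluded_by_fruit = (car == "Toyota")  # Fruit = Apple rules out Toyota
  excluded_by_dog   = (car == "Dodge")   # Dog = Mutt rules out Dodge
  excluded_by_fruit || excluded_by_dog
end

p allowed  # => [] -- no possible value for Car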

That's why you can get test cases with "no possible value". Sometimes you can leave them be, sometimes you might want to introduce a "N/A" value for a parameter, and sometimes your invalid and married pairs may need a bit of adjusting. Generally, given the real context of your actual test plan, it is clear which path to take to resolve them.

Dealing With "No Possible Value"

Let's use a more realistic flight reservation example to show how the context of your real tests indicates what you should do to resolve instances of "no possible value".

Our flight reservation example test plan has these 5 parameters:

Destination: USA, China, Australia
Class: First, Business, Economy
Reserve a Car: Yes, No
Type of Car: Luxury, Economy
Payment Method: Credit Card, Frequent Flier Miles, Upgrade Coupons

Without any constraints, we're going to get test cases that pair Reserve a Car as No with Type of Car as Luxury. This makes no sense, and even if a tester were just to ignore it, it hides the fact that the nonsensical test case may be the only test case where we attempt to pay for a Luxury car with Frequent Flier Miles. If this pairing led to a bug, we'd miss the bug if we just ignored the car portion of the generated test case, so it's much better to use constraints to eliminate having Type of Car as Luxury when Reserve a Car is No.

We have options for how to constrain this, but let's suppose we create the following 2 uni-directional married pairs:

When Type of Car = Luxury then Reserve a Car = Yes
When Type of Car = Economy then Reserve a Car = Yes

We've taken both our values for Type of Car and married them to Reserve a Car as Yes. Hexawise is still duty-bound to pair Reserve a Car as No with things like Destination as USA and Payment Method as Credit Card. For these test cases there won't be a valid value for Type of Car, so it will get "no possible value". This leads us to our first option for dealing with no possible values.

Option 1: Do nothing! There really is no possible value in that case, and you're OK with it.

If we use option 1, the "no possible value" is intentional. Option 1 is the simplest, but not always our best option. Not every "no possible value" is intentional; many, in fact, are unintentional, the result of inconsistent or missing constraint logic. A "no possible value" therefore tends to be a "quality smell" in a test plan. Meaning... you're not sure something is rotten when you see it, but it sure smells like something might be rotten. So that brings us to another option for handling intentional no possible values.

Option 2: Introduce a "Not Applicable" value.

We can add a third value to Type of Car.

Type of Car: Luxury, Economy, N/A

Now, whenever Reserve a Car is No, there will be just 1 value left for Type of Car: N/A. So all those cases of "no possible value" will become "N/A". Problem fixed? Almost, but not quite! Hexawise is now duty-bound to pair Type of Car as N/A with Reserve a Car as Yes. To avoid this new nonsense, we add an invalid pair:

When Reserve a Car = Yes then Type of Car cannot = N/A

Options 1 and 2 work when you discover the source of the "no possible value" is intentional. But what about when you discover it's not? In that case, the "quality smell" has led us to something that truly is rotten!

Let's suppose we offer Class as Business only on international, not domestic, flights. Easy enough. Let's add an invalid pair to reflect this business rule.

When Destination = USA then Class cannot = Business

Let's also suppose that an Upgrade Coupon can only be used for Business class, not Economy or First Class. Also easy. Let's add a uni-directional married pair for this:

When Payment Method = Upgrade Coupons then Class = Business

Have you spotted the trouble yet? Hexawise is duty-bound to pair Destination as USA with Payment Method as Upgrade Coupons, since there could be a bug caused by that pairing, and if there is, you want to be sure to find it. In the test case that has this pairing, Hexawise is going to have "no possible value" for Class!

Option 3: Correct your constraint logic.

In this case, our constraint logic is incomplete. We need an invalid pair:

When Destination = USA then Payment Method cannot = Upgrade Coupons

This relieves Hexawise of its duty to pair Destination as USA with Payment Method as Upgrade Coupons and eliminates the "no possible value".
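
Expressed the same way, here's a minimal sketch in plain Ruby of the before-and-after. Again, this is just an illustration of the constraint logic, not Hexawise's algorithm:

classes = ["First", "Business", "Economy"]

def allowed_classes(destination, payment, classes)
  classes.select do |c|
    next false if destination == "USA" && c == "Business"          # invalid pair
    next false if payment == "Upgrade Coupons" && c != "Business"  # married pair
    true
  end
end

# Before the fix, this pairing had to be covered but left Class empty:
p allowed_classes("USA", "Upgrade Coupons", classes)    # => []

# The new invalid pair removes that pairing from the pairings Hexawise
# must cover; every remaining pairing has at least one valid Class:
p allowed_classes("USA", "Credit Card", classes)        # => ["First", "Economy"]
p allowed_classes("China", "Upgrade Coupons", classes)  # => ["Business"]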

Mysterious "No Possible Value"

For cases of an unintentional "no possible value", where you need to use option 3 to correct an inconsistency in your constraint logic, it's not always clear where the inconsistency is. In fact, it can be downright mysterious. The more constrained your plan is, the harder it can be to find the root cause.

My best advice for tracking down the cause of a "no possible value" in complicated cases is to stay patient and stick with it. Practice makes you better at it. Here are some additional tips:

  • Look at each test case with "no possible values" in isolation; don't try to analyze more than 1 at once
  • Start with the test case with the fewest "no possible values", fix the issues your analysis turns up, then regenerate your tests and repeat until you have none left
  • You need pencil and paper! Write out the test case that you're analyzing on paper, then do the analysis from the "Define Inputs" page where it's easiest to see all the possible values for each parameter
  • Take advantage of the hover highlighting of applicable value pairs in the Define Inputs screen. Collapse every section in the left panel except the Paired Values (constraints) so you can see as many value pairings at one time as possible
  • If you get completely stuck, reach out to a more experienced colleague or ask Hexawise for help

Doing this analysis can be tricky, so having Hexawise do the analysis for you instead is a great idea. We'd like to have a tooltip that explains each "no possible value". Since it can be tricky for us humans to determine why a particular "no possible value" exists in complicated cases, the algorithm to do the same is also tricky to get right, but we're working toward having a tooltip that explains the cause of the "no possible value", and we are going to be very excited when this feature is ready. We might accidentally create a self-aware AI that enslaves the humans in the process though. If we do, don't say we didn't warn you.

In addition to an automated explanation of where the "no possible value" came from, we also know that prevention is better than a cure, so we're working to identify these cases as soon as the constraints are entered so we can prompt you in the moment to correct the issue. We already do this for some of the simpler causes of "no possible value"; you may have been prompted by Hexawise already. We're working on Hexawise being able to detect more complicated causes so we can prevent more of these cases early.

In the end, always keep in mind that every "no possible value" is simply a conflict between a parameter value pairing Hexawise is duty-bound to provide to ensure you get 100% coverage, and the constraints you've applied to the plan in the form of married pairs and invalid pairs. With that in mind, have fun "no possible value" hunting.

spock-logical

By: Sean Johnson on May 1, 2014

Categories: Combinatorial Testing, Constraints, Hexawise test case generating tool, Hexawise tips

Justin posted this to the Hexawise Twitter feed

cave-question

It sparked some interesting and sometimes humorous discussion.

cave-exploration

The parallels to software testing are easy to see. In order to learn useful information, we need to understand the purpose of the visit to the cave (or what we are seeking to learn in software testing). Normally the most insight can be gained by adjusting what you seek to learn based on what you find. As George Box said:

Always remember the process of discovery is iterative. The results of each stage of investigation generating new questions to be answered during the next.

Some "software testing" amounts to software checking or confirming that a specific known result is as we expect. This is important for confirming that the software continues to work as expected as we make changes to the code (due to the complexity of software unintended consequences of changes could lead to problems if we didn't check). This can, as described in the tweet, take a form such as 0:00 Step 1- Turn on flashlight with pre-determined steps all the way to 4:59 Step N- Exit But checking is only a portion of what needs to be done. Exploratory testing relies on the brain, experience and insight of a software tester to learn about the state of the software; exploratory testing seeks to go beyond what pre-defined checking can illuminate. In order to explore you need to understand the aim of the mission (what is important to learn) and you need to be flexible as you learn to adjust based on your understanding of the mission and the details you learn as you take your journey through the cave or through the software. Exploratory software testing will begin with ideas of what areas you wish to gain an understanding of, but it will provide quite a bit of flexibility for the path that learning takes. The explorer will adjust based on what they learn and may well find themselves following paths they had not thought of prior to starting their exploration.

 

Related: Maximizing Software Tester Value by Letting Them Spend More Time Thinking - Rapid Software Testing Overview - Software Testers Are Test Pilots - What is Exploratory Testing? by James Bach

By: John Hunter on Apr 8, 2014

Categories: Exploratory Testing, Scripted Software Testing, Software Testing, Testing Checklists

Some of those using Hexawise use Gherkin as their testing framework. Gherkin is based on using a given [a], when [b] --> then [c] format. The idea is this helps make communication clear and makes sure business rules are understood properly. Portions of this post may be a bit confusing for new Hexawise users; links are provided for more details on various topics. But if you don't need to create output for Gherkin and you find yourself confused, you can just skip this post.

A simple Gherkin scenario: Making an ATM withdrawal

Given a regular account
  And the account was originally opened at Goliath National
  And the account has a balance of $500 
When using a Goliath National Bank 
  And making a withdrawal of $200 
Then the withdrawal should be handled appropriately 

Hexawise users want to be able to specify the parameters (used in given and when statements) and then import the set of Hexawise generated test cases into a Gherkin style output.

In this example we will use the Hexawise sample test plan (Gherkin example), which you can access in your Hexawise account.

Below, I'll get into how to export Hexawise-created test plans so they can be used to create Gherkin data tables (we do this ourselves at Hexawise).

In the then field we default to an expected value of "the withdrawal should be handled appropriately." This is something that may benefit from some explanation.

If we wanted to provide exact details about what happens in every variation of parameter values for each test script, those details would have to be created manually. That creates a great deal of work that has very little value. And it is an expensive way to manage for the long term, as each of those details has to be updated every time the system changes. So, in general, using a "behaves as expected" default value is best, providing extra details only when worthwhile.

For some people, this way of thinking can be a bit difficult to take in at first and they have to keep reminding themselves how to best use Hexawise to improve efficiency and effectiveness.

enter-default-expected-value

To enter the default expected value, mouse over the final step in the auto scripts screen. When you do, you will see the "Add Expected Results" link. Click that and add your expected result text.

expected-value-entry

The expected value entered on the last step with no conditions (the when drop-down box is blank) will be the default value used for the export (and therefore the one imported into Gherkin).

In those cases when providing special notes to testers is deemed worth the extra effort, Hexawise has 2 ways of doing this. If a special expected value exists for the particular conditions in an individual test case, then the special expected value content will be exported (and therefore used for Gherkin).

Conditional expected results can be entered using the auto scripts feature.

Or we can use the requirements feature when we want to require a specific set of parameter values to be tested. If we choose 2-way coverage (the default, pairwise coverage), every pair of parameter values will be tested at least once.

But if we wanted a specific set of, say, 3 exact parameter values ([account type] = VIP, [withdrawal ATM] = bank-owned ATM, [withdrawal amount] = $600), then we need to include that as a requirement. Each required test script added also includes the option to include an expected result. The sample plan includes a required test case with those parameters and an expected result of "The normal limit of $400 is raised to $600 in the special case of a VIP account using a Goliath National Bank owned ATM."

So, the most effective way to use Hexawise to create a pairwise (or higher-strength) test plan for Gherkin data tables is to have the then case be similar to "behaves as expected," and, when there is a need for special expected result details, to use the auto script or requirements features to include them. Doing so results in the expected result entered for that special case being the value used in the Gherkin table for then.

When you click the auto script button the tests are generated, and you can download them using the export icon.

autoscripts-export

Then select the option to download as a csv file.

script-export-options

You will download a zip file that you can then unzip to get 2 folders with various files. The file you want to use for this is the combinations.csv file in the csv directory.

The Ruby code we use to convert the commas to the pipes (|) used for Gherkin is:

#!/usr/bin/env ruby
require 'csv'

# Read the exported Hexawise test cases; each row is one test case.
tests = CSV.read("combinations.csv")

table = []
tests.each do |test|
  # Drop the first column (the test case number) and pipe-delimit the rest.
  table << "| " + test[1..-1].join(" | ") + " |\n"
end

IO.write("gherkin.txt", table.join)

Of course, you can use whatever method you wish to convert the format; this is just what we use. See this explanation for a few more details on the process.

Now you have your Gherkin file to use however you wish. And as the code is changed over time (perhaps adding parameter value options, new parameters, etc.) you can just regenerate the test plan and export it. Then convert it, and the updated Gherkin test plan is available.

 

Related: Create a Risk-based Testing Plan With Extra Coverage on Higher Priority Areas - Hexawise Tip: Using Value Expansions and Value Pairs to Handle Dependent Values - Designing a Test Plan with Dependent Parameter Values

By: John Hunter on Mar 27, 2014

Categories: Hexawise test case generating tool, Hexawise tips, Scripted Software Testing, Software Testing Efficiency, Testing Strategies

Coining a New Term

I'm coining a new term today, "grapefruit juice bugs."

My inspiration for this term is a blog post in the New York Times that David Pogue wrote. I was fascinated by the post and it got me thinking about a particular kind of bug in software that is more common than most people may realize. You could say that these bugs are surprisingly common. In fact, if you wanted to be more precise, you could even say that this term applies to a specific, surprisingly common type of surprising bug. Let me explain.

There's something about the chemical makeup of grapefruit juice that makes it interact with our biology and a large number of different drugs in ways that result in dangerous conditions. For example, certain drugs lose their effectiveness dramatically when interacting with grapefruit juice, which can have life-threatening consequences. Other times, the interactions with grapefruit juice can dramatically increase a drug's potency. This can result in "safe doses" becoming very unsafe.

Grapefruit Is a Culprit in More Drug Reactions

The 42-year-old was barely responding when her husband brought her to the emergency room. Her heart rate was slowing, and her blood pressure was falling. Doctors had to insert a breathing tube, and then a pacemaker, to revive her.

They were mystified: The patient’s husband said she suffered from migraines and was taking a blood pressure drug called verapamil to help prevent the headaches. But blood tests showed she had an alarming amount of the drug in her system, five times the safe level.

Did she overdose? Was she trying to commit suicide? It was only after she recovered that doctors were able to piece the story together.

“The culprit was grapefruit juice,” said Dr. Unni Pillai, a nephrologist in St. Louis, Mo. ...

The previous week, she had been subsisting mainly on grapefruit juice. Then she took verapamil, one of dozens of drugs whose potency is dramatically increased if taken with grapefruit. In her case, the interaction was life-threatening.

Last month, Dr. David Bailey, a Canadian researcher who first described this interaction more than two decades ago, released an updated list of medications affected by grapefruit. There are now 85 such drugs on the market, he noted, including common cholesterol-lowering drugs, new anticancer agents, and some synthetic opiates and psychiatric drugs, as well as certain immunosuppressant medications taken by organ transplant patients, some AIDS medications, and some birth control pills and estrogen treatments. ... Under normal circumstances, the drugs are metabolized in the gastrointestinal tract, and relatively little is absorbed, because an enzyme in the gut called CYP3A4 deactivates them. But grapefruit contains natural chemicals called furanocoumarins, that inhibit the enzyme, and without it the gut absorbs much more of a drug and blood levels rise dramatically.

For example, someone taking simvastatin (brand name Zocor) who also drinks a small 200-milliliter, or 6.7 ounces, glass of grapefruit juice once a day for three days could see blood levels of the drug triple, increasing the risk for rhabdomyolysis, a breakdown of muscle that can cause kidney damage.

 

So what do interactions between grapefruit juice and drugs have to do with software testing?

Like grapefruit juice's impact on prescription drugs, software testing involves critical interactions between different parts of the system. And risks exist when these different parts interact with one another. This is true whether you're talking about "large parts" interacting in System Testing or "small parts" interacting in Unit Testing.

Interactions between things are a very rich source of bugs in software. As anyone who has heard the infernal phrase "works on my machine" can tell you, software features and functions often work perfectly fine in many usage scenarios, hardware and software configurations, etc. - only to fail to work in ever-so-slightly different situations.

 

The difference between plain old everyday "Dual-Mode Faults" and "Grapefruit Juice Bugs"

A dual-mode fault occurs whenever two test inputs must both be present to trigger a defect. Most software testers start encountering them quite frequently within days of starting their jobs. Some examples:

  • This "buy" button works fine. Except when the customer is a "new user." (First, action = "click on the buy button" and Second, customer = "new user")

  • Transaction prices for share purchases are calculated correctly. Except when denominated in Japanese Yen. (First, Action = "purchase shares" and Second, Currency = "Japanese Yen")


While all grapefruit juice bugs are dual-mode faults, not all dual-mode faults are Grapefruit Juice Bugs:

  • Grapefruit juice bugs have to have a little element of surprise in them. When you explain them to a developer, their first reaction should be "Huh? How is that even possible?" or at least "Hmmm... That's odd. Let me investigate."

  • Anything along the lines of "This feature usually works, except in IE6, when..." is almost definitely not a grapefruit juice bug. Problematic interactions with IE6 are an incredibly common type of dual-mode fault, not a surprising one.

Whenever you hear "works on my machine" replies to your bug reports, and it takes a while for the issue to be replicated, odds are pretty good that a grapefruit juice bug might be involved.

Here's an example of an especially surprising grapefruit juice bug. This excerpt is from Apple's online help files, posted after users of the original iPad complained about problems with Wi-Fi connectivity. Certain screen brightness settings were causing problems with the Wi-Fi signals. I'm not even going to begin to guess how one would have anything to do with the other.

[Screenshot: Apple's support note on iPad Wi-Fi connectivity and screen brightness]

How can you identify grapefruit juice bugs during your testing?

What is a tester to do when faced with more potential grapefruit juice bugs than he can handle using traditional methods?

If you're a software tester trying to do your best to determine whether a feature or function in your System Under Test will work "on everyone's machine," you've got a nightmare on your hands. Really nasty combinatorial explosions arise when you consider all of the possible combinations that would be required to test multiple hardware options, multiple software options, multiple usage scenarios, multiple test data inputs (and multiple combinations of the test data itself), multiple ways in which users enter data, and all of the rest of the "stuff that could vary" when people use your application. If you take the time to think expansively about the possible variations in a medium-sized application, quadrillions of possible tests often result.
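
As a rough back-of-the-envelope sketch (the numbers here are invented, not measurements of any particular system), you can see both the explosion and why pairwise coverage stays tractable:

# Hypothetical system: 25 varying factors, 4 possible values each.
factors = 25
values_per_factor = 4

# Exhaustive testing: every combination of every value.
puts values_per_factor ** factors  # => 1125899906842624 (~1.1 quadrillion)

# Pairwise coverage target: every value pair across every pair of factors.
factor_pairs = factors * (factors - 1) / 2
puts factor_pairs * values_per_factor ** 2  # => 4800 value pairs to cover

A well-constructed pairwise suite packs many of those 4,800 value pairs into each test, which is why it can cover all of them in dozens of tests rather than a quadrillion.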

While not eating grapefruit and not drinking grapefruit juice might be wise if you are taking drugs, there is rarely, if ever, such an easy method for eliminating the possibility of negative results due to software interactions. Refusing to support IE 6 in order to avoid the disproportionate number of grapefruit juice-like problematic interactions associated with IE6 would be as close as you could come in the world of software.

Design of Experiments-based test design methods can help testers come to grips with this challenge. Orthogonal array software testing (often referred to as OATS or simply OA testing) is a test design strategy that allows us to efficiently detect bugs created by interactions within the system. Orthogonal array software testing is based on the principles of multifactor designed experiments as first explored by Sir RA Fisher.

Design of Experiments-based test design methods are very closely related to pairwise testing (AKA allpairs testing, all pairs testing, and pairwise-testing). Any of these test design strategies will allow a software tester to quickly generate a set of tests that includes tests for every single pair of test inputs.

This approach to test design often has multiple advantages, including faster test creation, more varied test scenarios, 100% coverage of all potential dual-mode faults (including hard-to-predict grapefruit juice bugs), and often a smaller resulting set of tests that will be quicker to execute. Having said that, it is by no means a magical silver bullet. This approach to test design requires test designers with above-average analytical abilities to identify the appropriate Parameters and Values for their system under test; this is sometimes easier said than done because it requires a new mindset from test designers.

Software testers can take solace that the challenges of software testing, while significant, are simple when compared to trying to understand the effects of drug interactions in people.

Combinatorial testing can look at bugs created by the interaction between multiple (3, 4, 5, 6...) variables. So if there was a bug that didn't get triggered just by using Chrome on Windows, but did get triggered if you also tried to replace an existing profile photo with a new one (test idea number 3), then pairwise testing might not catch it. Pairwise test design would create a set of tests that would include at least one test for each of these pairs:

  • Chrome & Windows and

  • Chrome & replace photo and

  • Windows & replace photo, but...

A set of pairwise tests might fail to test the specific combination of all three of those test inputs in the same test. With combinatorial test design approaches, you could create test plans with 100% coverage of 3-way interactions and be sure that all 3-way (or 4-way) interactions are covered. When you create sets of 3-way, 4-way, 5-way, and 6-way tests, though, you'll quickly discover that the number of tests required starts to balloon.
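
Here's a minimal sketch in plain Ruby of that distinction. The four tests below form a hypothetical pairwise suite for the three factors in the example above; they are not Hexawise output:

# Each test assigns a value to three factors: browser, OS, action.
tests = [
  ["Chrome",  "Windows", "upload photo"],
  ["Chrome",  "OS X",    "replace photo"],
  ["Firefox", "Windows", "replace photo"],
  ["Firefox", "OS X",    "upload photo"],
]

# Collect every pair of values that appears together in some test.
covered_pairs = tests.flat_map { |t| t.combination(2).to_a }.uniq

# Every pair drawn from Chrome, Windows and "replace photo" is covered...
p covered_pairs.include?(["Chrome", "Windows"])         # => true
p covered_pairs.include?(["Chrome", "replace photo"])   # => true
p covered_pairs.include?(["Windows", "replace photo"])  # => true

# ...but no single test combines all three, so a bug triggered only by
# that 3-way combination would slip through this pairwise suite.
p tests.include?(["Chrome", "Windows", "replace photo"])  # => false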

Hexawise allows you to create test plans with the coverage interactions you desire. This allows you to create sets of tests from 2-way all the way up to phenomenally thorough 6-way sets of tests. In fact, it even lets you generate clever sets of risk-based tests that will, say, prioritize comprehensive 4-way coverage on 4 sets of Parameter Values while ensuring only pairwise coverage of the other, lower-priority, interactions in your system under test. Hexawise also lets you create mixed-strength test plans, so if there are certain factors you are especially concerned about, you can set the interaction levels for those factors higher to cover more of their possible interactions.

 

Related: Hexawise Tip: Using Value Expansions and Value Pairs to Handle Dependent Values - Maximize Test Coverage Efficiency And Minimize the Number of Tests Needed - How to Model and Test CRUD Functionality - 25 Great Quotes for Software Testers

By: Justin Hunter on Feb 11, 2014

Categories: Bugs, Combinatorial Testing, Design of Experiments, Multi-variate Testing, Pairwise Software Testing, Software Testing, Testing Strategies

Hexawise has had another great year and we owe it to you, our users. Thank you! As a result of your input and word-of-mouth recommendations, in 2013, the Hexawise test design tool:

  • Added powerful new features,

  • Became even easier to use,

  • Introduced lots of new practical instructional content, and

  • Doubled usage again.

If you haven’t visited Hexawise lately, please log in now to see all the improvements we've made (register for free).

Ease-of-Use Enhancements

Instructional Guides for Hexawise Features
We’ve added illustrated step-by-step instructions describing how to use Hexawise features.

Find them at help.hexawise.com. For our advanced features, like creating risk-based test plans, auto-generating complete scripts, and using Value Expansions, we’ve gone beyond “how?” to explain “why?” you would want to use these features.

Practical Test Design Tips
Want to see tips and tricks for creating unusually powerful tests? Want to learn about common mistakes (and how to avoid them)? Want to understand whether pairwise test design really works in practice? These topics and more can now be found at training.hexawise.com.

Frog-Powered Self-Paced Learning

frogs next

Want to become a Hexawise guru? Listen to the frogs. If you complete the achievements recommended by your friendly amphibian guides, you will level up from a Novice to Practitioner to Expert to Guru.

frogs become expert

You’ll complete two kinds of achievements on your way to guru-ness. To complete some achievements you only need to use certain Hexawise features. Completing the other achievements requires learning important test design concepts and demonstrating that you understand them. The frogs, ever-ready to guide you towards test design mastery, will greet you immediately upon logging into your account.

Powerful New Features

Recently added features that will make you a more powerful and speedy test designer include:

Coverage of Specific High-Priority Scenarios
You can now force specific scenarios to appear in the tests you generate using the Requirements feature.

Requirements Traceability
Requirements traceability is easier to manage than ever with the Requirements feature.

Generation of Detailed Test Scripts
The Auto-Scripting feature allows you to automatically transform sets of optimized test conditions into test scripts that contain detailed instructions in complete sentences.

Auto-Population of Expected Results in Test Scripts
If you want to, you can even automatically generate rules-based Expected Results to appear as part of your test steps by using the Expected Results feature.

To find out more about these features Hexawise added in 2013, please check out these cool slides: "Powerful New Hexawise Features".

 

Public Recognition and Rapid Growth

Kind Words
As a five-year old company working as hard as we can to make Hexawise the best damn tool it can be, hearing input from you keeps us motivated and headed in the right direction. Once a week or so, we hear users say nice things about our tool. Here are some of the nice things you guys said about Hexawise this past year:

 

“Working coaching session with customer today. Huge data/config matrix was making them weep. Stunned silence when I showed them @Hexawise :)”

-Jim Holmes (@aJimHolmes)

 

“That would be @Hexawise & combinatorial testing to the rescue once again. #Thanks”

-Vernon Richards (@TesterFromLeic)

 

“Freaking awesome visualisation of test data coverage. Kind courtesy of @Hexawise at Moolya!”

-Moolya Testing (@moolyatesting)

 

“Using @Hexawise combinatorial scenarios for e-commerce basket conditions. Team suitably impressed by speed and breadth of analysis. #Win”

-Simon Knight (@sjpknight)

 

“Just discovered Hexawise today, brilliant tool for creating test cases based on coverage of many variables.”

-Stephen Blower (@badbud65)

 

“This changes everything.”

-Dan Caseley (@Fishbowler)

 

Using Hexawise is one of the highest ROI software development practices in the world.

-Results, paraphrased, of independent study by industry expert Capers Jones and colleagues.

 

Rapid Growth
Throughout 2013, Hexawise continued to be piloted and adopted at new companies week after week. Hexawise is currently being used to generate tests at:

More than 100 Fortune 500 firms
More than 2,000 smaller firms

Hexawise office

New offices
Having moved into our new offices in October, the Hexawise team now gets together to do all our best stuff at this swanky new location in Chapel Hill, North Carolina.

What's Next?

Constant Improvements
We keep a public running tally of all of our enhancements. As you can see, we’re making improvements at a rate of more than once a week.

Want us to Add a New Feature?
If you have an idea for an additional feature, please let us know. We listen. There’s not much that makes us happier than finding ways to improve Hexawise.

Please Tell Your Friends
Our growth has been almost purely due to word-of-mouth recommendations from users like you. So if you find Hexawise to be helpful, please help spread the word. We appreciate it more than you know.

You can even let your friends in on a bit of a testing industry secret: while company-wide Hexawise licenses start at $50,000 per year, we allow the first five users to use fully-featured Hexawise accounts at no cost!

Thank you for all of your help and support this past year. And have a great 2014!

By: Justin Hunter on Jan 28, 2014

Categories: Hexawise test case generating tool, Recommended Tool

A common mistake software companies make is creating products where features built for advanced users overwhelm and confuse average users. At Hexawise we have focused on creating a great experience for average and new users while providing advanced users powerful options.

How to Avoid a Common Product Mistake Many Teams Make by Mark Suster

The single biggest mistake most product teams make is building technology for what they believe the user would want rather than what the actual end-user needs. … My philosophy is simple. You design your product for the non-technologist. The “Normal.”

Give people fewer options, fewer choices to make. Give them the bare essentials. Design products for your mom. … power users will always find the features they need in your product even if they’re hidden away. Give them a configure button somewhere. From there give them options to soup up the product and add the complexity that goes with newfound power.

Make sure you read his full post and subscribe to his blog; the posts are clearly written, pragmatic, and insightful.

Our experiences at Hexawise match the points the post makes. We've designed our web-based tool with Jason Fried/37 Signals-inspired "KISS" (Keep It Simple Stupid) design principles in mind. Our interesting debates about how to add (or whether to add) new features have often been based on the exact tensions his post describes. "Power users want new features" vs. "... but users love our tool precisely because it's easy-to-use and it doesn't have 'feature bloat'."

 

We've experimented with the suggestion raised in the post (e.g., rather than saying "no" to every advanced user request, we build such features in hidden places for advanced users without distracting our regular users). Results have been good so far.

The Bulk add/bulk edit feature in Hexawise is an example of a powerful feature that is implemented in a way that doesn't interfere with the ease of use for those that don't take advantage of this feature.

For us, there are few things more important to our tool's success in the marketplace than our ability to get the balance right between "uncluttered simplicity that everyone wants" vs. "with the powerful capabilities that advanced users want."

There are natural tensions between the two opposing design goals. Sean Johnson (Hexawise CTO) is the strongest advocate for simplicity. John and Justin (Hexawise's CEO) love simplicity and understand the importance of simplicity for usability, but also find ourselves pushing for advanced features.

I am a strong believer in simplicity, with the option for power users to access advanced features. Turning this concept into practice isn't easy. Thankfully Sean has the mindset and skill to make sure we don't sacrifice simplicity and usability while providing power features to power users at Hexawise. I was also lucky enough to have another amazing co-worker, Elliot Laster, at a previous employer who filled this valuable role. One of the tricks to making this work is hiring a great person with the ability to make it a reality (it requires deep technical understanding, an understanding of usability principles, and a deep connection to real users, all of which Sean and Elliot have to an amazing degree).

Getting the balance right is one of the most important things a software company can do. We've tried really hard to get it right. We're probably overdue for formal usability testing of basic functionality. Reading blogs like Mark's is useful for new ideas and also for the push to do what you know you should do but seem to keep putting off.

 

By: John Hunter and Justin Hunter on Jan 21, 2014

Categories: Hexawise test case generating tool, Software Development

The process used to hire employees is inefficient in general and even more inefficient for knowledge work. Justin Hunter, Hexawise CEO, posted the following tweet:

The labor market is highly inefficient for software testers. Many excellent testers are undervalued relative to average testers. Agree?

 

The tweet sparked quite a few responses:

inefficient-job-market-tweet

I think there are several reasons why the job market is inefficient in general, and why it is even more inefficient for software testing than for most jobs.

 

  • Often, how companies go about hiring people is less about finding the best people for the organization and more about following a process that the organization has created. Without intending to, people can become more concerned about following procedural rules than about finding the best people.

  • The hiring process is often created much like software checking: a bunch of simple things to check - not because doing so is actually useful, but because simple procedural checks are easy to verify. So organizations require a college degree (and maybe even a specific major). And they will use keywords to select or reject applicants. Or require certification or experience with a specific tool. Often the checklist used to disqualify people contains items that might be useful signals but shouldn't be used as barriers; yet it is really easy for people who don't understand the work to apply the rules in the checklist to filter the list of applicants.

  • It is very hard to hire software testers well when those doing the hiring don't understand the role software testing should play. Most organizations don't understand it, so they hire software checkers. They, of course, don't value people that could provide much more value (software testers that go far beyond checking). The weakness of hiring without understanding the work is common for knowledge work positions, and it is likely even more problematic for software testing because the role is even less well understood than most knowledge work.

 

And there are plenty more reasons for the inefficient market.

Here are a few ideas that can help improve the process:

  • Spend time to understand and document what your organization seeks to gain from new hires.

  • Deemphasize HR's role in the talent evaluation process and eliminate dysfunctional processes that HR may have instituted. Talent evaluation should be done by people that understand the work that needs to get done. HR can be useful in providing guidance on legal and company-decided policies for hiring. Don't have people that can't evaluate the difference between great testers and good testers decide who should be hired or what salary is acceptable. Incidentally, years of experience, certifications, degrees, past salary and most anything else HR departments routinely use are often not correlated to the value a potential employee brings.

  • A wonderful idea, though a bit of a challenge in most organizations, is to use job auditions. Have people actually do the job to find out whether they can do what you need (work on a project for a couple of weeks, for example). This has become more common in the last 10 years but is still rare.

  • I also believe you are better off hiring for somewhat loose job descriptions, if possible, and then adjusting the job to who you hire. That way you can maximize the benefit to the organization based on the people you have. At Hexawise, for example, most of the people we hire have strengths in more than one "job description" area. Developers with strong UI skills, for instance, are encouraged to make regular contributions in both areas.

  • Creating a rewarding work environment helps (this is a long-term process). One of the challenges in getting great people is they are often not interested in working for dysfunctional organizations. If you build up a strong testing reputation, great testers will seek out opportunities to work for you, and when you approach great testers they will be more likely to listen. This also reduces turnover, and while that may not seem to relate to the hiring process, it does (one reason we hire so poorly is we don't have time to do it right, which is partly because we have to do so much of it).

  • Having employees participate in user groups and attending conferences can help your organization network in the testing community. And this can help when you need to hire. But if your organization isn't a great one for testers to work in, they may well leave for more attractive organizations. The "solution" to this risk is not to stunt the development of your staff, but to improve the work environment so testers want to work for your organization.

 

Great quote from Dee Hock, founder of Visa:

Hire and promote first on the basis of integrity; second, motivation; third, capacity; fourth, understanding; fifth, knowledge; and last and least, experience. Without integrity, motivation is dangerous; without motivation, capacity is impotent; without capacity, understanding is limited; without understanding, knowledge is meaningless; without knowledge, experience is blind. Experience is easy to provide and quickly put to good use by people with all the other qualities.

Please share your thoughts and suggestions on how to improve the hiring process.

 

Related: Finding, and Keeping, Good IT People - Improving the Recruitment Process - Six Tips for Your Software Testing Career - Understanding How to Manage Geeks - People: Team Members or Costs - Scores of Workers at Amazon are Deputized to Vet Job Candidates and Ensure Cultural Fit

By: John Hunter on Jan 14, 2014

Categories: Checklists, Software Testing, Career

The Hexawise Software Testing blog carnival focuses on sharing interesting and useful blog posts related to software testing.

 

  • Using mind-mapping software as a visual test management tool by Aaron Hodder - "I want to be able to give and receive as much information as I can in the limited amount of time I have and communicate it in a way that is respectful of others' time and resources. These are my values and what I think constitutes responsible testing."

  • Healthcare.gov and the Tyranny of the Innocents by James Bach - "Management created the conditions whereby this project was 'delivered' in a non-working state. Not like the Dreamliner. The 787 had some serious glitches, and Boeing needs to shape that up. What I’m talking about is boarding an aircraft for a long trip only to be told by the captain 'Well, folks it looks like we will be stuck here at the gate for a little while. Maintenance needs to install our wings and engines. I don’t know much about aircraft building, but I promise we will be flying by November 30th. Have some pretzels while you wait.'"

 

jungle-bridge

Rope bridge in the jungle by Justin Hunter

 

  • Software Testers Are Test Pilots by John Hunter - "Software testers should be test pilots. Too many people think software testing is the pre-flight checklist an airline pilot uses."

  • Where to begin? by Katrina Clokie - "Then you need to grab your Product Owner and anyone else with an interest in testing (perhaps architect, project manager or business analyst, dependent on your team). I'm not sure what your environment is like, usually I'd book an hour meeting to do this, print out my mind map on an A3 page and take it in to a meeting room with sticky notes and pens. First tackle anything that you've left a question mark next to, so that you've fleshed out the entire model, then get them to prioritise their top 5 things that they want you to test based on everything that you could do."

  • Being a Software Tester in Scrum by Dave McNulla - "Pairing on development and testing strengthens both team members. With people crossing disciplines, they improve understanding of the product, the code, and what other stakeholders find important."

  • Stop Writing Code You Can’t Yet Test by Dennis Stevens - "The goal is not to write code faster. The goal is to produce valuable, working, testing, remediated code faster. The most expensive thing developers can do is write code that doesn’t produce something needed by anyone (product, learning, etc). The second most expensive thing developers can do is write code that can’t be tested right away."

  • Is Healthcare.gov security now fixed? by Ben Simo - "I am very happy that the most egregious issue was immediately fixed. Others issues remain. The vulnerabilities I've listed above are defects that should not make it to production. It doesn't take a security expert or “super hacker” to exploit these vulnerabilities. This is basic web security. Most of these are the kinds of issues that competent web developers try to avoid; and in the rare case that they are created, are usually found by competent testers."

  • Embracing Chaos Testing Helps Create Near-Perfect Clouds - "Chaos Monkey works on the simple premise that if we need to design for high availability, we should design for failure. To design for failure, there should be ways to simulate failures as they would happen in real-world situations. This is exactly what a Chaos Monkey helps achieve in a cloud setup.
    Netflix recently made the source code of Chaos Monkey (and other Simian Army services) open source and announced that more such monkeys will be made available to the community."

  • Bugs in UK Post Office System had Dire Consequences - "A vocal minority of sub-postmasters have claimed for years that they were wrongly accused of theft after their Post Office computers apparently notified them of shortages that sometimes amounted to tens of thousands of pounds. They were forced to pay in the missing amounts themselves, lost their contracts and in some cases went to jail. Second Sight said the Post Office's initial investigation failed at first to identify the root cause of the problems. The report says more help should have been given to sub-postmasters, who had no way of defending themselves."

  • Traceability Matrix: Myth and Tricks by Adam Howard - "And this is where we get to the crux of the problem with traceability matrices. They are too simplistic a representation of an impossibly complex thing. They reduce testing to a series of one to one relationships between intangible ideas. They allow you to place a number against testing. A percentage complete figure. What they do not do is convey the story of the testing."

  • Six Tips for Your Software Testing Career by John Hunter - "Read what software testing experts have written. It’s surprising how few software testers have read books and articles about software testing. Here are some authors (of books, articles and blogs) that I've found particularly useful..."

By: John Hunter on Dec 16, 2013

Categories: Software Testing