Software testing concepts help us compartmentalize the complexity that we face in testing software. Breaking the testing domain into various areas (such as usability testing, performance testing, functional testing, etc.) helps us organize and focus our efforts.

But those concepts are constructs that often have fuzzy boundaries. What matters isn't where we place certain software testing efforts. What matters is helping create software that users find worthwhile and, hopefully, enjoyable.

One of the frustrations I have faced in using internet-based software in the last few years is that it often seems to be tested without considering that some users will not have fiber connections (and might have high-latency connections). I am not certain latency (perhaps combined with lower bandwidth) is the issue, but I have often found websites either literally unusable or effectively unusable (way too frustrating to use).

It might be that the user experience I face (on the poorly performing sites) is as bad for all users, but my guess is it is a decent user experience on the fiber connections the managers have when they decide the solution is acceptable. It is a usability issue, but in my opinion it is also a performance issue.

It is certainly possible to test performance on powerful laptops with great internet connections and get good results for web applications that will perform badly on smartphones using wifi or less-than-ideal cell connections. This failure to understand real user conditions is a significant problem and an area of testing that should be improved.

I consider this an interaction between performance testing and user-experience testing (I use "user-experience" to distinguish it from "usability testing", since I can test aspects of the user experience without users testing the software). The page may load in under 1 second on a laptop with a fiber connection, but that isn't the only measure of performance. What about your users who are connecting via a wifi connection with high latency? What if, for them, the page takes 8 seconds to load and your various interactive features barely work or won't work at all given the high latency?
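
One practical way to catch this class of problem early is to run the same page-load checks under simulated network conditions. Here is a minimal sketch assuming Selenium's Chrome bindings, which expose network throttling via the Chrome DevTools Protocol; the URL, latency figure, and bandwidth figures are placeholder assumptions you would replace with profiles matching your real users.

```python
# Minimal sketch: timing a page load under simulated high-latency,
# low-bandwidth conditions (Selenium + Chrome; values are placeholders).
import time

from selenium import webdriver

driver = webdriver.Chrome()

# Roughly approximate a weak mobile connection via the DevTools Protocol.
driver.set_network_conditions(
    offline=False,
    latency=300,                      # extra round-trip latency in ms
    download_throughput=500 * 1024,   # bytes/sec (~4 Mbps)
    upload_throughput=250 * 1024,     # bytes/sec (~2 Mbps)
)

start = time.time()
driver.get("https://example.com/")    # placeholder URL
print(f"Load time under simulated conditions: {time.time() - start:.1f}s")

driver.quit()
```

Running the same check without throttling gives you the fiber-style baseline; the gap between the two numbers is the experience gap described above.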

In some cases ignoring the performance some users experience may be OK. But if you care about a system that delivers fast load times to users, you need to consider performance not just for a subset of users but for users overall. The extent to which you prioritize various use cases will depend on your specific situation.

I have a large bias toward keeping the basic experience very good for all users. If I add fancy features that are useful, I do not like to accept meaningful degradation to any user's experience - graceful degradation is very important to me. That is less important to many of the sites I use, unfortunately. What priority you place on it is a decision that impacts your software development and software testing process.

Hexawise attempts to add features that are useful while at the same time paying close attention to making sure we don't make things worse for users who don't care about the new feature. Making sure the interface remains clear and easy to use is very important to us. It is also a challenge, when you have powerful and fairly complex software, to keep usability high. It is very easy to slip and degrade the user's experience. Sean Johnson does a great job making sure we avoid doing that.

Maintaining the responsiveness of Hexawise takes a huge effort on our part, given the heavy computation required to generate tests for large test plans.

You also have to recognize where you cannot be all things to all people. Using Hexawise on a smartphone is just not going to be a great experience. Hexawise is not suited to that use case, and therefore we don't test it.

For important performance characteristics it may well be that you should create a separate Hexawise test plan to test performance under several different conditions (relating to latency, bandwidth, and perhaps phone operating system). It could be done within one test plan; it just seems to me that separate test plans would be more effective most of the time. You might have a primary test plan covering many functional aspects and a much smaller test plan just to check that several things work fine in high-latency and smartphone use cases.

Within that plan you may well want to test various values for certain parameters:

  • operating system: iOS, Android 7, Android 6, Android 5

  • latency: ...

Of course, what should be tested depends on the software being tested. If none of the items above matter in your case they shouldn't be used. If you are concerned about a large user base you may well be concerned about performance on various Android versions since the upgrade cycle to new versions is so slow (while most iOS users are on the latest version fairly quickly).

If latency has a big impact on performance then including a parameter on latency would be worthwhile and testing various parameter values for it could be sensible (maybe high, medium and low). And the same with testing various levels of bandwidth (again, depending on your situation).
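
As a sketch of how such a plan comes together, here is a pairwise test set generated for the parameters above using the open-source allpairspy Python library (an assumption for illustration; Hexawise generates and manages these sets through its web interface):

```python
# Sketch: pairwise combinations of the OS / latency / bandwidth parameters
# discussed above, using the open-source allpairspy library
# (pip install allpairspy).
from allpairspy import AllPairs

parameters = [
    ["iOS", "Android 7", "Android 6", "Android 5"],  # operating system
    ["high", "medium", "low"],                       # latency
    ["high", "medium", "low"],                       # bandwidth
]

for i, (os_, latency, bandwidth) in enumerate(AllPairs(parameters), 1):
    print(f"Test {i}: OS={os_}, latency={latency}, bandwidth={bandwidth}")
```

Every OS/latency, OS/bandwidth, and latency/bandwidth pair appears in at least one test, in far fewer tests than the 36 exhaustive combinations.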

My view is always very user focused, so I naturally relate pretty much everything I do to how it impacts the end user's experience.

Related: 10 Questions to Ask When Designing Software Tests - Don’t Do What Your Users Say - Software Testers Are Test Pilots

By: John Hunter on Jul 26, 2016

Categories: User Experience, Testing Strategies, Software Testing

Usability testing is the practice of having actual users try the software. Outcomes include data on the tasks users are given to complete (successful completion, time to complete, etc.), comments the users make, and expert evaluation of their use of the software (noticing, for example, that none of the users follow the intended path to complete a task, or that many users looked for a different way to complete a task and, failing to find one, eventually found a way to succeed).

Usability testing involves systematic evaluation of real people using the software. This can be done in a testing lab where an expert watches the user, but that is expensive. Remote monitoring (watching the user's screen, voice communication between the user and the expert, and viewing a webcam showing the user) is also commonly used.

In these settings the user is given specific tasks to complete and the testing expert watches what the user does. The expert also asks the user what they found difficult and confusing (in addition to what they liked) about the software.

The point of usability testing is to get feedback from real users. If you can't test with the actual users of a system, it is important to consider whether your usability testers fairly accurately represent that population. If the users of the system are fairly unsophisticated and you use usability testers who are very computer savvy, the testers may well not provide good feedback (as their use of the software may be very different from that of the actual users).

"Usability testing" does not encompass experts evaluating the software based on known usability best practices and common problems. This form of expert knowledge of wise usability practices is important but it is not considered part of "usability testing."

Find more exploration of software testing terms in the Hexawise software testing glossary.

Related: Usability Testing Demystified - Why You Only Need to Test with 5 Users (not a complete answer, but it does provide insight into the value of quick testing during software development) - Streamlining Usability Testing by Avoiding the Lab - Quick and Dirty Remote User Testing - 4 ways to combat usability testing avoidance

By: John Hunter on Jul 4, 2016

Categories: User Experience, User Interface, Testing Strategies, Software Testing

Recently we added the revisions feature to Hexawise-created test plans. It allows you to easily revert to a previous version of a test plan for whatever reason you wish. We provide a list of revisions with their dates, and all you have to do is click a button to revert to a version. We also give you the option to copy a revision (in case you want to view that plan but also want to keep all the updates you have made since then).

Now when you are editing a test plan in Hexawise you will see a revisions link on the top of the screen (see image):

revisions-button

Note: revisions are available in editable test plans; the revisions link is not available in uneditable test plans (such as the sample plans). For this example, I saved an editable copy of the plan into my private plans.

In the example in this post, I am using the Hexawise sample plan "F) Flight Reservations" which you can view in your own account.

revisions-note

One of the notes in this sample plan says we should also test whether Java is enabled or not. So I added a new parameter for Java and included enabled and not-enabled as parameter values.

At a later date, if we wanted to go to a previous version all we have to do is click the revisions link (see top image) and then we get a list of all the revisions:

revisions-action

Mouse over the revision we want and we can either make a copy of that version or revert to it.

Revisions make it very easy to get back to a previous version of your test plan. This has been quite a popular feature addition. I hope you enjoy it. We are constantly seeking to improve based on feedback from our users. If you have comments and suggestions, please share them with us.

Related: My plan has a lot of constraints in it. Should I split it into separate plans? - Whaddya Mean "No Possible Value"? - Customer Delight

By: John Hunter on May 31, 2016

Categories: Hexawise, Hexawise test case generating tool, Hexawise tips

At Hexawise we aim to improve the way software is tested. Achieving that aim requires not only providing our clients with a wonderful software tool (which our customers say we’re succeeding at) but also a commitment from the users of our tool to adopt new ways of thinking about software testing.

We have written previously about the importance of the values of Bill Hunter (our founder's father) to Hexawise. That has led us to focus constantly on how to maximize the benefits our customers gain from using Hexawise. This focus has led us to realize that customers who take advantage of the high-touch training services and on-demand expert test design support we offer often realize unusually large benefits and roll out Hexawise more quickly and broadly than customers who simply acquire licenses and try to "get the tool and make it available to the team."

We are now looking for someone to take on the challenge of helping our clients succeed. The principles behind our decision to put so much focus on customer success are obvious to those who understand the thinking of Bill Hunter, W. Edwards Deming, Russell Ackoff, etc., but they may seem a bit odd to others. The focus of this senior-level position really is to help our customers improve their software testing results. It isn't just a happy-sounding title with no bearing on what the job actually entails.

The person holding this position will report to the CEO and work with other executives at Hexawise who all share a commitment to delighting our customers and improving the practice of software testing.

Hexawise is an innovative SaaS firm focused on helping large companies use smarter approaches to test their enterprise software systems. Teams using Hexawise get to market faster with higher quality products. We are the world’s leading firm in our niche market and have a growing client base of highly satisfied customers. Since we launched in 2009, we have grown both revenues and profits every year. Hexawise is changing the way that large companies test software. More than 100 Fortune 500 companies and hundreds of other smaller firms use our industry leading software.

Join our journey to transform how companies test their software systems.

Hexawise office

Description: VP of Customer Success

In the Weeks Prior to a Sale Closing

  • Partner with sales representatives to conduct virtual technical presentations and demonstrations of our Hexawise test design solution.

  • Clearly explain the benefits and limitations of combinatorial test design to potential customers using language and concepts relevant to their context by drawing upon your own “been there, done that” experiences of having successfully introduced combinatorial test design methods in multiple similar situations.

  • Identify and assess business and technical requirements, and position Hexawise solutions accordingly.

Immediately Upon a New Sale Closing

  • Assess a new client’s existing testing-related processes, tools, and methods (as well as their organizational structure) in order to provide the client with customized, actionable recommendations about how they can best incorporate Hexawise.

  • Collaborate with client stakeholders to proactively identify potential barriers to successful adoption and put plans in place to mitigate / overcome such barriers.

  • Provide remote, instructor-led training sessions via webinars.

  • Provide multi-day onsite instructor-led training sessions that cover basic software test design concepts (such as Equivalence Class Partitioning, the definition of Pairwise-Testing coverage, etc.) as well as how to use the specific features of Hexawise.

  • Include industry-specific and customer-specific customized training modules and hands-on test design exercises to help make the sessions relevant to the testers and BA’s who attend the training sessions.

  • Collaborate with new users and help them iterate, improve, and finalize their first few sets of Hexawise-generated software tests.

  • Set rollout and adoption success criteria with clients and put plans in place to help them achieve their goals.

Months After a New Sale Closing

  • Continue to engage with customers onsite and virtually to understand their needs, answer their test design questions, and help them achieve large benefits from test optimization.

  • Monitor usage statistics of Hexawise clients and reach out, as appropriate, to provide proactive assistance at the first sign that they might be facing adoption/rollout challenges.

  • Collaborate with stakeholders and end users at our clients to identify opportunities to improve the features and capabilities of Hexawise and then collaborate with our development team to share that feedback and implement improvements.

Required Skills and Experience

We are looking for a highly experienced combinatorial test design expert with outstanding analytical and communication skills to provide these high-touch onboarding services and to partner with our sales team in engaging prospective clients.

Education and Experience

  • Bachelor’s or technical university degree.

  • Deep experience successfully introducing combinatorial test design methods on multiple different kinds of projects to several different groups of testers.

  • Experience setting rollout and adoption success criteria with multiple teams and putting plans in place to achieve them.

  • Minimum 5 years in software testing, preferably at an IT consulting firm or large financial services firm.

Knowledge and Skills

  • Ability to present and demonstrate the capabilities of the Hexawise tool and the additional services we provide to our clients beyond the tool.
  • Excellent communication and presentation skills, including questioning techniques.
  • Passion for consulting with customers.
  • Understanding of how IT and enterprise software are used to address the business and technical needs of customers.
  • Hands-on skills with relevant and/or related software technology domains.
  • Ability to communicate the value of products and solutions in terms of financial return and impact on customer business goals.
  • Solid industry acumen: keeping current with software testing trends and able to converse with customers at a detailed level on pertinent issues and challenges.
  • Ability to represent Hexawise knowledgeably, based on a solid understanding of Hexawise's business direction, portfolio, and capabilities.
  • Understanding of the competitive landscape and the ability to position Hexawise effectively.
  • A cover letter that describes who you are, what you've done, and why you want to join Hexawise.
  • Ability to work and learn independently and as part of a team.
  • Desire to work in a fast-paced, challenging start-up environment.

Why join Hexawise?

Salary + bonus; medical and dental; 401(k) plan; free parking and a very slick Chapel Hill office! The opportunity to work with a fast-growing, innovative technology company that is changing the way software is tested.

Key Benefits:

  • Salary: Negotiable, but minimum of $100,000 + commissions based upon client license renewals
  • Benefits: Health and dental included, 401(k) plan
  • Travel: Average of no more than 2-3 days onsite per week
  • Location: Chapel Hill, NC*

*Working from our offices would be highly preferable. We might consider remote working arrangements for an exceptional candidate based in the US.

Apply for the VP of Customer Success position at Hexawise.

By: John Hunter on May 12, 2016

Categories: Hexawise, Career, Software Testing, Lean, Customer Success, Agile

It has been quite a long time since we last posted a roundup of great content on software testing from around the web.

  • Mistakes We Make in Testing by Michael Sowers - "Not being involved in development at the beginning and reviewing and providing feedback on the artifacts, such as user stories, requirements, and so forth. Not collaborating and developing a positive relationship with the development team members..."
  • Changing the conversation about change in testing by Katrina Clokie - "I'm planning to stop talking about bugs, time and money. Instead I'm going to start framing my reasoning in terms of corporate image, increasing quality awareness among all disciplines, and improving end-user satisfaction."
  • How to Pack More Coverage Into Fewer Software Tests by Justin Hunter - "There are simply too many interactions for a tester to keep track of on their own. As a result, manually-selected test sets almost always fail to test for a rather large number of potentially important interactions."
  • Building Quality by Alan Page - "your best shot at creating a product that customers crave is to get quantitative and qualitative feedback directly from your users... Get it in their hands, listen to them, and make it better."
  • Dr. StrangeCareer or: How I Learned to Stop Worrying and Love the Software Testing Industry by Keith Klain - "Testing is hard. Doing it right is very hard. An ambiguous, unchartered journey into a sea of bias and experimentation, but as the line from the movie goes, 'the hard is what makes it great'."
  • Exploratory Testing 3.0 by James Bach and Michael Bolton - "Because we now define all testing as exploratory. Our definition of testing is now this: 'Testing is the process of evaluating a product by learning about it through exploration and experimentation, which includes: questioning, study, modeling, observation and inference, output checking, etc.'"
  • Coverage Is Not Strongly Correlated With Test Suite Effectiveness by Laura Inozemtseva and Reid Holmes - "Our results suggest that coverage, while useful for identifying under-tested parts of a program, should not be used as a quality target because it is not a good indicator of test suite effectiveness. "
  • How Software Testers Can Teach, Share and Play with Others by Justin Rohrman - Software testers "bring a varied skill set to software development organizations — math, experiment design, modeling, and critical thinking."
  • Who Killed My Battery: Analyzing Mobile Browser Energy Consumption - " dynamic Javascript requests can greatly increase the cost of rendering the page... by modifying scripts on the Wikipedia mobile site we reduced by 30% the energy needed to download and render Wikipedia pages with no change to the user experience."

By: John Hunter on Apr 27, 2016

Categories: Roundup, Software Testing

When you outsource testing responsibilities to a third party, there are several key points to consider beyond the three most frequently asked questions:

  • What is the cost per hour of your services?
  • What evidence can you provide of your firm’s level of expertise in our industry and/or the technical skills we feel you’ll need?
  • What is your plan for increasing testing automation?

In particular, it is worth developing a thorough understanding of how the vendor will identify what kinds of things should be tested, how those things should be tested, and how that information will be communicated to you.
  1. How do you ensure testing is performed thoroughly - beyond just “happy path”/ confirmatory tests?
  2. Do you use Exploratory Testing broadly? Would you recommend we use it? Exploratory Testing is a well-established testing approach.
    • Champions of exploratory testing include James Bach and Michael Bolton.
    • If your vendor hasn’t heard of exploratory testing and starts to explain the disadvantages of undisciplined “ad hoc testing,” it’s a warning sign that they might have rigid tunnel vision about what good testing entails. Ad hoc testing is not the same thing as exploratory testing, but those who haven't learned about exploratory testing may confuse the two.
  3. How do you communicate the diminishing marginal returns that exist when you execute sets of test scripts?
    • In Hexawise, the analyze tests screen helps show this concept. That screen shows both the percentage of parameter value pairs tested and the coverage matrix showing which parameter value pairs are tested and untested. This view can be seen for any specific number of tests along the test plan (using the adjustable bar at the top of the image). The image shows 80% of the paired values have been tested after 10 test cases. Using the slider, this view lets you see exactly which parameter value pairs are untested at each point. In this example, after 15 tests there is 96% pairwise coverage, and 19 tests cover all pairwise combinations. (A simple sketch of this pair-counting calculation appears after this list.)
    • Be conscious that your vendor has a strong conflict of interest here. 80% 2-way coverage might be sufficient for your needs on a given project, but stopping there (instead of executing the complete set of tests) could cut their billable hours in half, in which case they may push for 100% 2-way coverage (to increase billable hours). Of course, 100% 2-way coverage may be called for; these decisions have to be made with an understanding of the software being tested.
    • Implications: ideally, you would like a testing vendor that is happy to openly share and discuss these tradeoffs and their strategy to get you the best results in your situation.
  4. Do your testers use a structured test design approach such as pairwise test design?
    • If so, how many of your testers regularly use pairwise test design or similar methods to design tests?
    • During the process of selling you their testing services, vendors have a strong incentive to highlight their efficiency-enhancing capabilities. As soon as the ink dries on the contract, though, they have an incentive to keep billable hours high.
  5. Why (and in what types of testing) do they use it?
    • Pairwise and combinatorial test design approaches can be widely used in virtually all kinds of software testing.
    • If your vendor indicates that they use these approaches only in narrow areas (e.g., most likely functional testing or configuration testing), that is a large red flag that they don’t understand these test design approaches very well at all.
    • If your vendor does not understand how to apply these test design methods broadly (including in systems integration testing, performance testing, etc.) then you can safely assume that there will be considerable wasteful redundancy in any areas where these test design approaches are not used.
  6. If we gave you 50 test scripts we already have, could you generate a set of pairwise tests for us that could test the same system more thoroughly in significantly fewer tests?
    • A knowledgeable vendor should be able to analyze your tests and provide a draft set for your review within 24 hours.
    • If the vendor takes multiple days and needs to search far and wide for an available expert to create an optimized set of tests for you, that indicates the actual number of their testers capable of this kind of test design is probably pretty low.
    • Another practical way to double-check vendor claims is to search sites like LinkedIn for keywords such as pairwise testing, orthogonal array testing, and/or Hexawise.
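
To make the coverage-percentage idea in question 3 concrete, here is a small sketch of the underlying arithmetic. The parameters and tests are hypothetical, and this is the generic pair-counting idea rather than Hexawise's exact implementation:

```python
# Sketch: computing 2-way (pairwise) coverage of a test set.
from itertools import combinations

parameters = {
    "Browser": ["Chrome", "Firefox", "Safari"],
    "OS": ["Windows", "macOS"],
    "Connection": ["Fiber", "DSL", "3G"],
}

# All parameter-value pairs that could ever be covered (21 in this example).
all_pairs = {
    ((a, va), (b, vb))
    for a, b in combinations(sorted(parameters), 2)
    for va in parameters[a]
    for vb in parameters[b]
}

def pairwise_coverage(tests):
    """Fraction of all possible parameter-value pairs covered by `tests`."""
    covered = set()
    for test in tests:
        covered |= set(combinations(sorted(test.items()), 2))
    return len(covered & all_pairs) / len(all_pairs)

tests = [
    {"Browser": "Chrome", "OS": "Windows", "Connection": "Fiber"},
    {"Browser": "Firefox", "OS": "macOS", "Connection": "3G"},
]
print(f"{pairwise_coverage(tests):.0%} of value pairs covered")  # 6 of 21
```

Computing this percentage after each successive test reproduces the diminishing-returns curve the analyze tests screen displays: early tests add many new pairs, later tests add only a few.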

Making sure your software testing process stays current with the best ideas in software testing is an important factor in creating great software that your customers love. Companies often understand the need to stay current on successful coding practices, but fewer organizations pay attention to good practices in software testing. This often means there is a good deal to be gained by spending some time examining and improving your software testing practices.

By: John Hunter and Justin Hunter on Mar 29, 2016

Categories: Testing Strategies, Software Testing

When thinking about how to test software, these questions can help you think of ideas you might otherwise miss. The tips are useful for anyone testing software; we also include tips specifically relevant to those using the Hexawise software testing application.

  • What additional things could I add that probably won't matter (but might)?
  • What are the two parameters in the plan with the most values, and should I shrink the number of values for either? If you have a parameter or two with many more values than the other parameters, that can greatly increase the number of test cases needed to explore every pairwise (or higher) combination; the sketch after this list shows why the two largest parameters set a floor on the test count. If those values are critical to test, then they should stay, but often a high number of parameter values is an indication that they could be reduced. Using value expansions may well be a wise choice.
  • Do the tests cover all the high priority scenarios that stakeholders want covered? Once you generate the tests, think about high priority scenarios and, if any are missing, add them as required test cases. It may well be worth adding tests for the most common scenarios. Often this can be done without extra tests when using Hexawise: it will take the required cases into account and then create test cases to match your criteria, frequently without increasing the total. Sometimes it will add a test or a few tests, in which case you can decide whether the benefit is worth the added cost.
  • If you're testing a process, what are the if/then points where the process diverges? Make sure the alternative processes are being tested.
  • If the plan is heavily constrained, would it make sense to split it into multiple separate plans?
  • Might it make a difference to how the system operates if you were to include additional variation ideas related to: who, what, when, where, why, how and how many? Those questions are often helpful in identifying parameters that should be tested (and occasionally in identifying parameter values to test).
  • Have you accidentally included values in your plan that everyone is confident won't impact the system? You want to test everything that is important, but testing things that will not impact whether the software works adds cost with no benefit.
  • Have you accidentally included "script killing" negative tests in your plan? See details on this point in our training file, different types of negative tests need to be handled differently when using Hexawise
  • Have you clearly thought through the distinction between combinations that are impossible to test and combinations that will lead the system to display error messages? Impossible-to-test combinations can be avoided using the invalid pair feature. But situations where users can enter values that should result in error messages should be tested, to validate that those error messages appear.
  • Have you considered the cost/benefit trade-off of executing, say, 50% of the Hexawise generated test set vs 75% vs 100%? It is best to think about the most sensible business decision instead of automatically going for some specific percentage. Depending on the software being tested and the impact on your business and customers, different levels of testing will make sense.

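As a concrete illustration of the point about oversized parameters above: the product of the two largest parameter value counts is a hard floor on how many tests 100% pairwise coverage requires, because every value pair from those two parameters must appear in some test and a single test covers exactly one such pair. The plan below is hypothetical:

```python
# Sketch: the two largest parameters set a lower bound on pairwise test count.
parameters = {                                   # hypothetical test plan
    "Payment method": ["Visa", "MC", "Amex", "PayPal", "Gift card"],
    "Browser": ["Chrome", "Firefox", "Safari", "Edge"],
    "User type": ["New", "Returning"],
    "Currency": ["USD", "EUR"],
}

sizes = sorted((len(values) for values in parameters.values()), reverse=True)
print(f"At least {sizes[0] * sizes[1]} tests for 100% pairwise coverage")
# 5 x 4 = 20 here; trimming "Payment method" to 3 values drops the floor to 12.
```
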
Too often software testing fails to emphasize the importance of experienced software testers using their brains, insight, experience, and knowledge to make judgments about the best course of action. Hexawise is a wonderful tool to help software testers, but the tool needs a smart and thoughtful person to wield it and produce the results we all want to see.

Related: Software Testers Are Test Pilots - Create a Risk-based Testing Plan With Extra Coverage on Higher Priority Areas - How Much Detail to Include in Test Cases?

By: John Hunter and Justin Hunter on Mar 14, 2016

Categories: Testing Strategies, Software Testing, Hexawise test case generating tool, Combinatorial Software Testing, Hexawise tips

Hexawise welcomes Kelly Ross as Vice President of Sales. Kelly will be responsible for driving Hexawise growth by working with enterprise clients to introduce Hexawise tools into their testing environments. She will also build the sales organization through strategic hires to support the company’s growth.

Commenting on the appointment, Hexawise CEO, Justin Hunter, said: “Hexawise is poised for explosive growth in the coming year and I am delighted that Kelly will be joining us to spearhead that growth. Kelly has been highly successful both as a salesperson and as a sales leader, which is the perfect combination for our company at this time. We are excited to have her join our executive team.”

Kelly brings more than 20 years of sales leadership experience to Hexawise, including driving growth in software sales organizations at SAS Institute, Vignette Corporation and Intuit Health. She has recently worked as an independent consultant, helping start-ups and small businesses establish sales processes to accelerate revenue growth in their markets.

Kelly said: “I’m excited to join a team that is driving innovation in the software testing market. Enterprise software teams are using Hexawise to accelerate testing and get to market faster. Hexawise has a growing client base of highly satisfied customers leveraging the value of the tools, including more than 9000 users at Accenture. The revenue growth opportunities are tremendous and I am fortunate to be joining the team during such a pivotal time.”

One of Kelly’s first priorities will be to hire a new Sales Development Representative to work with her on generating new leads and growing the sales pipeline for Hexawise. “We are looking for intelligent, energetic sales professionals who would like to help us achieve our goals through strategic, directed outbound efforts.”

About Hexawise

Hexawise is changing the way companies test software. Software testers at more than 100 Fortune 500 firms use the Hexawise software test design tool to scientifically prioritize scenarios they should execute in order to achieve higher coverage in fewer tests. Organizations like NASA have used sophisticated test design methods for years to learn as much as possible from each software test they execute. With its easy-to-use test design tool and unparalleled test design training and support programs, Hexawise has become the worldwide leader at making these sophisticated test design methods accessible to the masses.

Benefits delivered by Hexawise are dramatic and measurable, including:

  1. Time and cost savings from faster test case design
  2. Time and cost savings from faster test execution
  3. Decreased costs of defect resolution by identification of more bugs early in the development life cycle

By: Justin Hunter on Feb 22, 2016

Categories: Hexawise

Hexawise - More Coverage Fewer Tests

Testers who use Hexawise consistently pack significantly more coverage into fewer software tests. It might seem counterintuitive to people unfamiliar with pairwise and orthogonal array-based testing that more thorough coverage can be achieved while using fewer tests, but this is a clear and well-established fact. This article explains how Hexawise consistently achieves superior coverage even while using fewer tests.

Time savings and thoroughness improvements achieved by testers using Hexawise at one of our insurance clients recently are typical. Let’s quickly address the first two benefits before diving deeper into a detailed explanation of how testers achieved the thoroughness improvements described in the third benefit.

Hexawise - Benefits Findings

The time savings in the test selection and documentation phase (shown in the top box) are easy enough to understand. Testers using Hexawise save time by automating selection and documentation steps that they would otherwise have to complete manually.

Similarly, the time savings in the test execution phase (shown in the middle box) are equally straightforward. Hexawise can generate fewer test scenarios compared to what testers would create on their own. Time savings in test execution come about simply because it takes less time to execute fewer tests.

So far so good. But how exactly do testers using Hexawise consistently achieve superior coverage while using fewer software tests? By generating optimized test sets with wasteful redundancy minimized, with the maximum amount of variation between scenarios, and with systematic coverage of potential defects that could be caused by interactions.

Hexawise - Specific Benefits

Hexawise Minimizes Wasteful Repetition. The powerful test generation algorithm inside Hexawise systematically eliminates wasteful repetition from every test scenario. If a given combination of test conditions has already appeared together in a test, the test generation algorithm will find and use other combinations of values instead of the wastefully repetitive one, even if that means Hexawise’s blazingly fast optimization algorithm needs to explore thousands of candidate combinations to achieve this goal. With wasteful repetition eliminated, Hexawise test sets require fewer tests to achieve thorough testing.

Hexawise Minimizes Wasteful Repetition
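
To see the repetition-avoidance idea in miniature, here is a toy greedy generator: each new test is chosen to cover as many not-yet-covered value pairs as possible. This is a simplified illustration, not Hexawise's actual algorithm, and the parameters are hypothetical:

```python
# Toy sketch: greedily pick each test to cover the most not-yet-covered pairs.
from itertools import combinations, product

parameters = {
    "Account type": ["Checking", "Savings", "Brokerage"],
    "Channel": ["Web", "Mobile", "Branch"],
    "Region": ["US", "EU"],
}
names = sorted(parameters)

# Every parameter-value pair that needs to appear in at least one test.
uncovered = {
    ((a, va), (b, vb))
    for a, b in combinations(names, 2)
    for va in parameters[a]
    for vb in parameters[b]
}

def pairs_of(test):
    return set(combinations(sorted(test.items()), 2))

tests = []
while uncovered:
    candidates = (dict(zip(names, values))
                  for values in product(*(parameters[n] for n in names)))
    best = max(candidates, key=lambda t: len(pairs_of(t) & uncovered))
    tests.append(best)          # repetitive candidates score low and lose out
    uncovered -= pairs_of(best)

print(f"{len(tests)} tests instead of 18 exhaustive combinations")
```

Because a candidate that repeats already-covered pairs scores low, the greedy step naturally produces the maximum-variation behavior described next.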

Hexawise Maximizes Variation Between Tests. If you take a close look at any Hexawise-generated set of tests, you will notice that variation is maximized as much as scientifically possible from one test to the next. This is the beneficial flip side of the repetition-minimization coin. Useful variation from test to test is the thoroughness-improving outcome whenever wasteful repetition is eliminated. When testers start to execute tests that explore new combinations of values and new paths through applications, they find more defects.

Hexawise Maximizes Variation Between Tests

Superior, Systematic Coverage. Interactions between different test inputs are a major source of defects. As a result, interactions between inputs are important to test thoroughly and systematically. Testers using Hexawise use a “coverage dial” in Hexawise to determine the coverage strength they would like for a given set of tests. From there, Hexawise’s test optimization algorithms systematically detect all potential interactions that are in scope to be tested for, and Hexawise ensures tests are carefully constructed to close every such potential coverage gap. Doing this kind of analysis by hand, even using tools like Excel, is time-consuming, impractical, and error-prone. There are simply too many interactions for a tester to keep track of on their own. As a result, manually-selected test sets almost always fail to test for a rather large number of potentially important interactions. In contrast, the Hexawise test optimization algorithm systematically eliminates gaps in testing coverage that manually-selected test sets routinely fail to cover. Compare the coverage achieved by the project’s 37 manually-selected “business as usual” tests (above) to the more compact, efficient, and thorough set of 25 Hexawise-generated tests (below).

Gaps in Coverage

Hexawise Optimized Coverage

In short, when testers select scenarios by hand, the outcome is typically too many tests that take too long to select and document, contain too much wasteful redundancy, and leave an unknown number of potentially serious gaps in coverage. In contrast, when testers use Hexawise to generate optimized sets of tests, they quickly generate unusually thorough sets of highly varied tests, at the push of a button, that systematically achieve user-specified thoroughness goals. And testers can communicate the coverage achieved by their test sets to stakeholders more clearly than ever before.

By: Justin Hunter on Dec 15, 2015

Categories: Pairwise Software Testing, Pairwise Testing, Combinatorial Testing, Business Case

The amount of detail used in a test plan will depend upon your particular context. When I'm asked how much detail to include by clients, I've started drawing a simple 2 X 2 matrix that looks like this:

 

testing-detail-depends-on-context

 

There is simply too much variation between different teams of testers and business contexts to provide a one-size-fits-all answer to this question. There are good and valid reasons that different teams around the world use very different test plan documentation approaches when it comes to test case writing styles.

 

You and your colleagues should familiarize yourselves with several different approaches that other teams use. Think about the pros and cons of using formalized and detailed script structures as opposed to the pros and cons of other "documentation-light" approaches. Once you're aware of the options available, you should consciously adopt the approach that works best for your context.

 

These sources are directly on point and provide more examples for your consideration than I can get into here:

  1. Dr. Cem Kaner's excellent piece "What's a Good Test Case?"

  2. A presentation I gave at an STP conference called "Documenting Software Testing Instructions - A Survey of Successful Approaches"

By: John Hunter and Justin Hunter on Oct 8, 2015

Categories: Testing Strategies