Access Rights define the actions someone you share a Hexawise project with is able to take. The rights are set at the project level, for all test plans in that project.
If separate rights are desired for different test plans, put those test plans in a separate project and define the access rights for that project as desired.
An administrator has full access to comment on, edit, and add new plans, and to copy plans to other projects. They may also add, remove, and edit people's access rights on the project.
Full Access rights allow adding, commenting on, editing, and copying plans to other projects, but not adding, removing, or editing people on a project.
Create and Comment rights allow the user to add new plans and comment on (but not edit) existing plans.
Hexawise's Auto-Scripts feature creates detailed tester instructions from sets of optimized test conditions.
A feature in Hexawise that lets you add multiple parameters at the same time as a comma-separated list.
Checking, in the software testing context, is the process of verifying that various known facets of the software work as expected. Software checking is often largely automated, with unit tests and tools such as Selenium used to verify that the most obvious features, functions and other aspects of the software work as expected. It is useful to think of software checking as distinct from software testing, which is a more thoughtful process requiring expertise and judgement. Even for checking, though, there are many things where it is easiest and safest to have a human just look and verify there is nothing obviously wrong. Software checking is a critical part of software testing, but it is only a part of software testing. A danger that must be considered with software checking is that verifying that no checks fail is only as useful as the checks people thought to include. Additionally, of course, it is only as useful as the accuracy of those checks. Related: Exploring a Cave or Software, Testing and Checking Refined
A combination may include two (pairwise) or more (combinatorial) parameters. A specific test case will include a value for each parameter (and thus will include combinations of specific settings of those parameters). The purpose of combinatorial testing is to efficiently test valid combinations (some combinations are not valid - for example, a type of credit card for a cash payment) in order to catch bugs caused by interactions between the values of different parameters.
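To see why efficiency matters here, a quick sketch (with hypothetical parameters and values) comparing the size of the exhaustive combination space to the number of distinct value pairs that actually need covering:

```python
from itertools import combinations, product

# Hypothetical parameters for an online checkout (for illustration only).
parameters = {
    "browser": ["Chrome", "Firefox", "Safari", "Edge"],
    "os": ["Windows", "Mac", "Linux"],
    "payment": ["credit card", "debit card", "gift card"],
    "currency": ["USD", "EUR", "GBP"],
}

# Testing every combination of values grows multiplicatively.
exhaustive = len(list(product(*parameters.values())))
print(exhaustive)  # 4 * 3 * 3 * 3 = 108 tests

# The number of distinct value *pairs* to cover is much smaller, and a
# pairwise plan can pack many pairs into each single test.
pairs = sum(len(a) * len(b) for a, b in combinations(parameters.values(), 2))
print(pairs)  # 12 + 12 + 12 + 9 + 9 + 9 = 63 pairs
```

Because each test of 4 parameters covers 6 pairs at once, a well-designed pairwise plan needs far fewer than 108 tests to cover all 63 pairs.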
Combinatorial Testing is an umbrella term used to describe Design of Experiments-based test design methods. Design of Experiments-based test design methods seek to uncover as much important information as possible in as few tests as possible. In software testing, these test case selection methods include:
Design of experiments (DoE), factorial designed experiments - A very simple explanation is a systematic approach to running experiments in which multiple parameters are varied simultaneously. This allows for learning a great deal from very few experiments and for quickly learning about interactions between parameters. DoE principles are at the core of how Hexawise software creates effective software test plans.
The expected outcome of a software test (for example, the user profile is created with a photo included or an error message is shown saying that an email address is required). Also see: How do I save test documentation time by automatically generating Expected Results in test scripts? Related terms: test plan - test script - required test case
Cem Kaner defines exploratory testing as "a style of software testing that emphasizes the personal freedom and responsibility of the individual tester to continually optimize the quality of his/her work by treating test-related learning, test design, test execution, and test result interpretation as mutually supportive activities that run in parallel throughout the project." on page 36 of A Tutorial in Exploratory Testing. Also see: Exploratory Testing Dynamics by James Bach, Jonathan Bach, and Michael Bolton and What is Exploratory Testing? by James Bach.
The idea for "ility" testing is to capture various non-functional testable areas of software such as: usability, reusability, reliability, maintainability. Even "security testing" can be included (though it doesn't end in "ility"). To me, "ility testing" is too large and amorphous an idea to be useful. But software testers will hear the term so it is useful to understand what it means. Within "ility testing" are a large number of very important concepts but I think lumping them into one "ility" category doesn't add value. Related: Starter list of "ilities" - The 7 Software “-ilities” You Need To Know
A pair of values that is not possible. For example, the Internet Explorer browser is not available on the Mac operating system. So if two parameters in the test plan are operating system and browser, we can use the invalid pair setting to indicate that the parameter value Internet Explorer is not valid when the operating system parameter value is Mac. Detailed example: Hexawise lets you set invalid pairs so that the test cases generated do not include such invalid test settings.
Married pairs allow you to mark all but one of the parameter values as invalid in a given situation. When some of the parameter values are valid and some are invalid, you should use the invalid pair option to make that clear. But when all of the parameter values except one are invalid, the married pair option is more efficient (otherwise you would have to mark every other parameter value pair as invalid). For example, if one of the parameters is payment method and another is credit card type, the married pair setting can restrict the credit card type values to be used in a test script only when the payment method is "credit card." Detailed example: how do I prevent certain combinations from appearing using the "Married Pair" feature?
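As a rough sketch of the underlying idea (hypothetical parameters and rules; Hexawise handles this declaratively in its UI), filtering generated combinations with one invalid pair and one married-pair style rule might look like:

```python
from itertools import product

# Hypothetical test plan parameters (for illustration only).
parameters = {
    "os": ["Windows", "Mac"],
    "browser": ["Internet Explorer", "Chrome"],
    "payment": ["credit card", "cash"],
    "card_type": ["Visa", "MasterCard", "N/A"],
}

def is_valid(test):
    # Invalid pair: Internet Explorer is not available on Mac.
    if test["os"] == "Mac" and test["browser"] == "Internet Explorer":
        return False
    # Married pair: card types are only used with credit card payments,
    # and "N/A" is only used with cash payments.
    if test["payment"] == "credit card":
        return test["card_type"] != "N/A"
    return test["card_type"] == "N/A"

all_tests = [dict(zip(parameters, combo)) for combo in product(*parameters.values())]
valid_tests = [t for t in all_tests if is_valid(t)]
print(len(all_tests), len(valid_tests))  # 24 raw combinations, 9 valid ones
```

The rules prune the raw combination space before any test selection happens, which is exactly why declaring them saves so much hand-editing of generated plans.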
A test plan with differing levels of interaction coverage for different parameters. For example, instead of testing all 12 parameters for all 4-way interaction effects, the test plan can select the 4 most important parameters and use 4-way coverage for those parameters while using 3-way or 2-way coverage for the remaining parameters. Also see: How do I create a risk-based testing plan that focuses more coverage on higher priority areas
Orthogonal Array Testing (OATS) - Orthogonal Array testing and pairwise testing are very similar in many important respects. They are both well established methods of generating small sets of unusually powerful tests that will find a disproportionately high number of defects in relatively few tests. The main difference between the two approaches is that pairwise testing coverage only requires that every pair of parameter values appear together in at least one test in a generated set. Orthogonal Array-based test designs, in contrast, have an added requirement that there be a uniform distribution throughout the domain. This added requirement tends to result in significantly more tests for Orthogonal Array Testing solutions than are required for pairwise test sets.
Are the extra tests required by Orthogonal Array-based solutions (compared to pairwise solutions) worth it? Probably not in software testing. As our CEO, Justin Hunter, explains, in software testing you are not seeking some ideal point in a continuum; you're looking to see whether two specific pieces of data will trigger a defect.
It is based on the principles behind design of experiments. Design of experiments and orthogonal array testing are great when you are looking at how important individual factors, and the interactions between factors, are for achieving the highest productivity. But this isn't the problem software testing is normally meant to solve. These approaches would be very good, for example, for evaluating different coding and architecture strategies to increase the speed of a software application for users.
Related terms: combinatorial software testing
A software testing strategy that tests all possible pairs of the parameter values. The idea is to catch bugs that are based on the interaction between two parameter values (there are bugs that only manifest themselves when two parameter values interact, so that testing each value separately may not find bugs that manifest only based on the interaction).
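The core idea can be sketched with a naive greedy generator (illustrative only; real tools such as Hexawise use far more sophisticated and scalable algorithms):

```python
from itertools import combinations, product

def pairwise_tests(parameters):
    """Greedy all-pairs sketch: repeatedly add the candidate test that
    covers the most not-yet-covered value pairs. Not optimal, and it
    enumerates every candidate, so it only suits tiny examples."""
    names = list(parameters)
    # Every value pair that must appear together in at least one test.
    uncovered = set()
    for (i, a), (j, b) in combinations(enumerate(names), 2):
        for va, vb in product(parameters[a], parameters[b]):
            uncovered.add((i, va, j, vb))
    tests = []
    while uncovered:
        best, best_covered = None, set()
        for candidate in product(*(parameters[n] for n in names)):
            covered = {(i, candidate[i], j, candidate[j])
                       for i, j in combinations(range(len(names)), 2)}
            covered &= uncovered
            if len(covered) > len(best_covered):
                best, best_covered = candidate, covered
        tests.append(dict(zip(names, best)))
        uncovered -= best_covered
    return tests
```

With 3 browsers, 2 operating systems, and 2 payment methods, exhaustive testing needs 12 tests; an all-pairs set covers every value pair in roughly half that, and the gap widens dramatically as parameters are added.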
A factor to be tested (for example, browser type or payment method).
Related term: value expansions
The value used in a test case for a parameter. For example, if browser type is the parameter the parameter value could be: Chrome, Firefox, Internet Explorer or Safari.
Related term: value expansions
Performance testing examines how the software performs (normally "how fast") in various situations. Performance testing does not just result in one value. You normally performance test various aspects of the software under differing conditions to learn about the overall performance characteristics. It can well be that certain changes will improve the performance results for some conditions (say, a powerful laptop with a fiber connection) and greatly degrade the performance for other use cases. And often the software can be coded to attempt to provide different solutions under different conditions. All this makes performance testing complex. But trying to over-simplify performance testing removes much of its value. Another form of performance testing is done on sub-components of a system to determine which solutions may be best. These are often server-based issues. They likely don't depend on individual user conditions but can be impacted by other things - for example, under normal usage option 1 provides great performance, but under larger load option 1 slows down a great deal and option 2 is better. Focusing on these tests of sub-components runs the risk of sub-optimization, where optimizing individual sub-components results in less than optimal overall performance. Performance testing sub-components is important, but what is most important is testing the performance of the overall system. Performance testing should always place a priority on overall system performance and not fall into the trap of creating a system with components that perform well individually but do not work well together when combined. Most often the end purpose of performance testing is to provide users the best experience. For this reason, it is important to create test conditions similar to those your users will experience.
It doesn't do much good to test your web application with a direct fiber connection (with huge bandwidth and extremely low latency) if many of your users are going to be using your software over poor wifi connections. This also starts to show how the concepts within software testing interact with each other and are interdependent. It is possible to see the preceding example as more usability testing than performance testing, but there certainly is a point where the two interact. It matters less what you call it and more that you properly test what the users will experience. Whether you call testing how system performance is affected by the conditions under which the software is used (say, via a fiber connection or a less than ideal wifi connection) "performance testing" or not isn't so important. What is important is that you test those conditions. Load testing, stress testing and configuration testing are all part of performance testing.
Re-evaluating the tests that previously passed after changes to the underlying software have been made. The purpose is to uncover any inadvertent impacts of the software code updates. The hidden complexity in software code means changes can have unforeseen effects. Our Hexawise software is very well suited to developing and maintaining a complete regression test plan, with each test case and its expected result. By creating efficient test plans, Hexawise provides more test coverage with fewer test cases.
Smoke testing is done to find out whether the most critical and visible functions of the software work. Smoke testing can include functional and unit tests. It can be automated checking of the software or can be done by a human. Smoke testing can also be used to flag areas that should be explored in greater detail. If, for example, something works but works poorly, that would often be seen as passing the smoke test, but it doesn't mean the software is ready to release to customers. As with many software testing terms, the boundaries can be drawn in different ways at different companies. It might be that a company includes items in smoke testing that in general wouldn't be considered part of smoke testing in the common use of the term. Smoke testing will often be a very quick process done before starting more detailed testing. Often it will be automated software checking with just a little hands-on effort to make sure there isn't some obvious big problem. Smoke testing is a part of regression testing. Related: Exploratory Testing
Software stress testing is the practice of testing the software application under exceptional conditions, beyond what is expected normally. Often this involves simulating large amounts of traffic to the software application. Stress testing involves looking at what happens when the software is operating under stress. In addition to just simulating lots of users on the system (or lots of traffic to the database, etc.), it can look at what happens if the web server's CPU is taxed heavily (perhaps by other software applications on the same server), what happens if the database server goes offline and is quickly rebooted, etc. Stress testing includes noting whether the software is able to continue to do what is required and also how it copes if it cannot complete actions. Do users get adequate notice of what has happened (do they get notice that something failed, or are they left thinking things worked when they didn't)? Do logs accurately identify what happened? Do security measures get compromised under any of the stressful conditions? Related terms: Smoke testing - User acceptance testing. Stress testing for conditions where certain parts of the system are removed (for example, breaking an API call made by the software so that it doesn't get a response when it expects one) can be called "negative testing." Note that "negative testing" also refers to test cases in which a specific "failure condition" is expected (for example, enter a character into a field that requires a number and verify that the error message displayed is correct). So just note that there are different scenarios both referred to with the same term - "negative testing."
test case - the specific set of parameter values for each parameter to be used in a specific test. It includes an expected result (either explicitly or implicitly).
test plan - the full set of test cases to be tested. See: Detailed Example for Creating Pairwise Test Plans Using Hexawise (webcast) - Getting Started with a Test Plan.
Instructions for the software tester on a specific test case. Also see: How do I Auto-Script test cases to quickly include detailed written tester instructions in my tests?
Used to validate that a specific feature works as expected. A unit is the smallest testable part of code. Unit tests are often automated so that they run automatically when code is updated. Doing so alerts the programmer if a code change has broken a feature that had been working, allowing the code to be fixed before it is put into production. Unit tests are also used to validate that things that should not be allowed are not allowed - for example, entering a character in a field that requires a numeral. The unit test would verify that the software does not allow invalid entries. So even for very simple features there are likely to be several unit tests, to verify the software responds appropriately to the various possible user actions. Related terms: regression testing - test driven development (TDD) - integration testing - agile software development - mock objects
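A minimal sketch of the numeral-field example above, using Python's unittest module (the `parse_quantity` function is hypothetical, invented for illustration):

```python
import unittest

def parse_quantity(text):
    """Parse a quantity form field; reject non-numeric or zero input.
    (Hypothetical function, standing in for real application code.)"""
    if not text.isdigit():
        raise ValueError("quantity must be a whole number")
    value = int(text)
    if value == 0:
        raise ValueError("quantity must be at least 1")
    return value

class QuantityFieldTests(unittest.TestCase):
    # Even this simple feature warrants several unit tests,
    # one per distinct user behavior.
    def test_accepts_valid_number(self):
        self.assertEqual(parse_quantity("3"), 3)

    def test_rejects_letters(self):
        with self.assertRaises(ValueError):
            parse_quantity("abc")

    def test_rejects_zero(self):
        with self.assertRaises(ValueError):
            parse_quantity("0")

# Run with: python -m unittest <module name>
```

If a later code change accidentally lets "abc" through, the second test fails immediately, which is the early alert described above.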
Usability testing is the practice of having actual users try the software. Outcomes include data on the tasks given to the users to complete (successful completion, time to complete, etc.), comments the users make, and expert evaluation of their use of the software (noticing, for example, that none of the users follow the intended path to complete a task, or that many users looked for a different way to complete a task and, failing to find it, eventually found another way to succeed). Usability testing involves systematic evaluation of real people using the software. This can be done in a testing lab where an expert can watch the user, but this is expensive. Remote monitoring (watching the screen of the user, communicating with the user by voice, and viewing a webcam showing the user) is also commonly used. In these settings the user will be given specific tasks to complete and the testing expert will watch what the user does. The expert will also ask the user questions about what they found difficult and confusing (in addition to what they liked) about the software. The concept of usability testing is to get feedback from real users. In the event you can't test with the real users of a system, it is important to consider whether you are fairly accurately representing that population with your usability testers. If the users of the system are fairly unsophisticated and you use usability testers who are very computer savvy, the testers may well not provide good feedback (as their use of the software may be very different from that of the actual users). "Usability testing" does not encompass experts evaluating the software based on known usability best practices and common problems. This form of expert knowledge of wise usability practices is important, but it is not considered part of "usability testing."
Related: Usability Testing Demystified - Why You Only Need to Test with 5 Users (while this is not a complete answer, it does provide insight into the value of quick testing to run during the development of the software) - Streamlining Usability Testing by Avoiding the Lab - Quick and Dirty Remote User Testing - 4 ways to combat usability testing avoidance
Testing by users of the software. Rather than testing the software against a requirements document, a user tries the software based on their experience. A subject matter expert will often test the software as a representative of the business unit for which the software was created (either for their own use or for the use of their customers).
Value expansions are used to create test plans when pairwise testing every parameter value is not worth the effort that would be required. This comes into play when there are many values for a parameter. If you require pairwise testing of 40 different values, that will greatly increase the number of tests needed in the test plan. Most often when a large number of values are possible, each individual value is not critical. For example, you may wish to test states in the USA, but there are really only 3 different ways the software should react. Value expansions let you create test plans with actual values while not requiring a huge number of test scripts. See a detailed example using 9 categories of cars and 45 car models. Related: Using Value Expansions and Value Pairs to Handle Dependent Values
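The mechanism can be sketched like this (the class names and state groupings below are hypothetical placeholders, not real behavior classes): the plan is designed against a few abstract classes, and concrete values are substituted in when scripts are written.

```python
# Hypothetical value expansion: each abstract class the plan was
# designed with maps to many concrete values (illustrative data only).
VALUE_EXPANSIONS = {
    "state class 1": ["California", "New York", "Texas"],
    "state class 2": ["Ohio", "Georgia", "Oregon"],
    "state class 3": ["Vermont", "Montana", "Delaware"],
}

def expand(test_case, parameter, pick=0):
    """Swap the abstract class for a concrete value in the test script,
    rotating through the options so scripts vary across the plan."""
    options = VALUE_EXPANSIONS[test_case[parameter]]
    return {**test_case, parameter: options[pick % len(options)]}

plan_row = {"state": "state class 1", "payment": "credit card"}
script_row = expand(plan_row, "state")
print(script_row["state"])  # California
```

Pairwise coverage is computed over the 3 classes (cheap), while testers still see realistic concrete states in their scripts.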
Value pairs - pairs of parameter values, normally used for invalid pairs and married pairs (value pairs that are interdependent with each other). For example:
When role = student, the classification is either Freshman, Sophomore, Junior, or Senior.
When role = staff, the classification is either Adjunct, Assistant, Professor, or Administrator.
So in this example role and classification are value pairs, in that the valid values for classification depend on the role. See: Using Value Expansions and Value Pairs to Handle Dependent Values
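The role/classification dependency above can be captured as a simple validity check (a sketch of the concept, not Hexawise's implementation):

```python
# Dependent values from the example: valid classifications depend on role.
VALID_CLASSIFICATIONS = {
    "student": {"Freshman", "Sophomore", "Junior", "Senior"},
    "staff": {"Adjunct", "Assistant", "Professor", "Administrator"},
}

def is_valid_pair(role, classification):
    """True only when the classification is allowed for the given role."""
    return classification in VALID_CLASSIFICATIONS.get(role, set())

print(is_valid_pair("student", "Junior"))   # True
print(is_valid_pair("staff", "Freshman"))   # False
```

A test generator that respects value pairs applies exactly this kind of check, so no generated test ever pairs role = staff with classification = Freshman.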