Matthew Heusser's Funny and Insightful "Fishing Maturity Model" Analogy

By Justin Hunter · Oct 12, 2009

Matthew Heusser, an accomplished tester, frequent blogger, insightful contributor to the Context-Driven Testing mailing list, and a testing expert whose opinion I respect a great deal, has just published a very thought-provoking blog post that highlights an important issue: "PowerPointy" consultants in the testing industry who have relatively weak real-world testing chops. It's called The Fishing Maturity Model.

Matthew argues that testers are well-advised to be skeptical of self-described testing experts who claim to "have the answer," particularly when such "experts" haven't actually rolled up their sleeves and done software testing themselves. His post hit close to home for me: while I'm by no means a testing expert in the broader sense of the term, I do know enough about combinatorial test design strategies to help most testing teams become demonstrably more efficient and effective... and yet my actual hands-on testing experience is admittedly quite limited. Even if I'm not one of the guys he's (justifiably) skewering with his funny and well-reasoned post (and he assures me I'm not; see below), a tester could certainly be forgiven for mistaking me for one, given my background.

Matthew's Five Levels of the Fishing Maturity Model (based, not so loosely, of course, on the Testing Maturity Model, not to mention CMM and CMMI)...


The five levels of the fishing maturity model:

1 – Ad-hoc. Fishing is an improvised process.

2 – Planned. The location and timing of our ships is planned. With a knowledge of how we did for the past two weeks, knowing we will go to the same places, we can predict our shrimp intake.

3 – Managed. If we can take the shrimp fishing process and create standard processes – how fast to drive the boat, how deep to let out the nets, how quickly, etc. – we can improve our estimates over time, more importantly.

4 – Measured. We track our results over time – to know exactly how many pounds of shrimp are delivered at what time with what processes.

5 – Optimizing. At level 5, we experiment with different techniques, to see what gathers more shrimp and what does not. This leads us to continual improvement.


Sounds good, right? Why, with a little work, this would make a decent 1-hour conference presentation. We could write a little book, create a certification, start running conferences …


And the rub...

The problem: I’ve never fished with nets in my entire life. In fact, the last time I fished with a pole, I was ten years old at Webelo’s camp.

I posted the following response, based on my personal experiences. Words in [brackets] are Matthew's replies to me.

Matthew,

Excellent post, as usual. [I'm glad you like it. Thank you.]

You raise very good points. Testers (and IT executives) should be leery of snake oil salesmen and use their judgment about “experts” who lack practical hands-on experience. While I completely agree with this point, I offer up my own experiences as a “counter-example” to the problem you pointed out here.

Three or four years ago, while I was working at a management consulting and IT company (with a personal background as an entrepreneur, lawyer, and management consultant – not in software testing), I began to recommend to any software testers who would listen that they start using a different approach to designing their test cases. Specifically, I recommended that testers begin using applied statistics-based methods* designed to maximize coverage in a small number of tests, rather than continuing to manually select test cases and rely on SMEs to identify the combinations of values that should be tested together. You could say I was recommending that they adopt what I consider to be (in many contexts) a “more mature” test design process.

The reaction I got from many teams was, as you say, “this whole thing smells fishy to me” (or some more polite version of the rebuttal “Why in the world should I, with my years of experience in software testing, listen to you – a non-software tester?”). Here’s the thing: when teams did use the applied statistics-based testing methods I recommended, they consistently saw large reductions in the time it took them to identify and document tests (often 30-40%), and they often saw huge leaps in productivity (e.g., finding more than twice as many defects per tester hour). In each proof-of-concept pilot, we measured these results carefully by having two separate teams – one using “business as usual” methods, the other using pairwise or orthogonal array-based test design strategies – test the same application. Those dramatic results led to my decision to create [Hexawise](http://www.hexawise.com/users/new), a software test design tool. [Point Taken ...]

My closing thoughts related to your post boil down to this:

  1. I agree with your comment – “There are a lot of bogus ideas in software development.”

  2. I agree that testers shouldn’t accept fancy PowerPointed ideas like “this new, improved method/model/tool will solve all your problems.”

  3. I agree that testers should be especially skeptical when the person presenting those PowerPointed slides hasn’t rolled up their sleeves for years as a software testing practitioner.

Even so…

  1. Some consultants who lack software testing experience actually are capable of making valuable recommendations to software testers about how they can improve their efficiency and effectiveness. It would be a mistake to write them off as charlatans simply because of that lack of hands-on experience. [I agree with the sentiment that sometimes, people out of the field can provide insight. I even hinted at that with the comment that at least, Forrest should listen, then use his discernment on what to use. I'm not entirely ready to, as the expression goes, throw the baby out with the bathwater.]

  2. Following the “bogus ideas” link above takes readers to your quote: “When someone tells you that your organization has to do something ‘to stay competitive,’ but he or she can’t provide any direct link to sales, revenue, reduced expenses, or some other kind of money, be leery.” I enthusiastically agree. In the software testing community, in my view, we do not focus enough on gathering real data** about which approaches work (or, ideally, in what contexts they work). A more data-driven management approach would help everyone understand which methods and approaches deliver real, tangible benefits in a wide variety of contexts versus those that look good on paper but fall short in real-world implementations. [Hey man, you can back up your statements with evidence, and you're not afraid to roll up your sleeves and enter an argument. I may not always agree with you, but you're exactly the kind of person I want to surround myself with, to keep each other sharp. Thank you for the thoughtful and well reasoned comment.]

-Justin


Company – http://www.hexawise.com
Blog – http://hexawise.com/blog


*I use the term “applied statistics-based testing” to cover pairwise, orthogonal array-based, and more comprehensive combinatorial test design methods such as n-wise testing (which can capture, for example, all valid combinations involving any six values).

**Here is an article I co-wrote that provides solid data showing that applied statistics-based testing methods can more than double the number of defects found per tester hour (while also finding more total defects) compared to testing that relies on "business as usual" methods during the test case identification phase.
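
For readers curious about what pairwise test design actually does, here is a minimal, illustrative sketch in Python of a greedy pairwise generator. The parameter names and values are hypothetical, and this is not the algorithm Hexawise uses; real tools also handle constraints between values, mixed-strength coverage, and far larger models. The core idea is simply that every pair of values from any two parameters appears together in at least one test, which requires far fewer tests than the full cross-product of all parameters.

```python
# Illustrative greedy pairwise test selection (hypothetical example; not Hexawise's algorithm).
from itertools import combinations, product

# Hypothetical system-under-test parameters and their possible values.
parameters = {
    "browser": ["Chrome", "Firefox", "Safari"],
    "os": ["Windows", "macOS"],
    "account_type": ["free", "premium", "admin"],
}
names = list(parameters)

def pairs_in(test):
    """Set of value pairs (one value from each of two parameters) covered by one test case."""
    return {((names[i], test[i]), (names[j], test[j]))
            for i, j in combinations(range(len(names)), 2)}

# Every value pair that must appear together in at least one test.
required = {((names[i], vi), (names[j], vj))
            for i, j in combinations(range(len(names)), 2)
            for vi, vj in product(parameters[names[i]], parameters[names[j]])}

# Greedy selection: repeatedly pick the candidate test covering the most uncovered pairs.
candidates = list(product(*parameters.values()))
suite = []
while required:
    best = max(candidates, key=lambda t: len(pairs_in(t) & required))
    suite.append(best)
    required -= pairs_in(best)

print(f"{len(candidates)} exhaustive combinations reduced to {len(suite)} pairwise tests")
for test in suite:
    print(dict(zip(names, test)))
```

For a toy model like this the savings are modest, but they grow quickly: ten parameters with four values each have more than a million possible combinations, while a pairwise test set typically needs only a few dozen tests. That kind of reduction is behind the efficiency gains described above.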