Based on my experience across dozens of pilot projects where we have gathered hard data, many software testers would more than double their productivity overnight on many projects if they used combinatorial test design methods intelligently rather than selecting test case conditions by hand.
In the 10-project study Combinatorial Software Testing Case Studies, testers who executed test cases created by a combinatorial testing algorithm, one designed to achieve as much coverage as possible in as few tests as possible, found 2.4 times more defects per tester hour, on average, than testers who executed manually selected test cases.
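To make the idea concrete, here is a minimal sketch of the kind of greedy all-pairs generation such an algorithm performs. The parameter names and the brute-force candidate search are illustrative assumptions only; production tools use considerably more sophisticated and scalable algorithms.

```python
from itertools import combinations, product

def pairwise_suite(parameters):
    """Greedily pick test cases until every pair of parameter values
    appears together in at least one test (a simple all-pairs sketch)."""
    names = list(parameters)
    # Every pair of values (from two different parameters) that still
    # needs to appear together in some test case.
    uncovered = {
        frozenset([(a, va), (b, vb)])
        for a, b in combinations(names, 2)
        for va in parameters[a]
        for vb in parameters[b]
    }
    suite = []
    while uncovered:
        # Brute-force the candidate test that covers the most uncovered pairs.
        # (Fine for a toy example; real tools avoid enumerating all combinations.)
        best = max(
            (dict(zip(names, values)) for values in product(*parameters.values())),
            key=lambda test: sum(
                frozenset([(a, test[a]), (b, test[b])]) in uncovered
                for a, b in combinations(names, 2)
            ),
        )
        suite.append(best)
        for a, b in combinations(names, 2):
            uncovered.discard(frozenset([(a, best[a]), (b, best[b])]))
    return suite

if __name__ == "__main__":
    # Hypothetical parameters, for illustration only.
    params = {
        "browser": ["Chrome", "Firefox", "Safari"],
        "os": ["Windows", "macOS", "Linux"],
        "account": ["free", "paid"],
    }
    tests = pairwise_suite(params)
    print(f"{len(tests)} tests cover every pair (exhaustive testing needs 18).")
    for t in tests:
        print(t)
```

Even on this toy example, a handful of pairwise tests cover every two-way interaction that 18 exhaustive combinations would; the gap widens dramatically as the number of parameters grows.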
How many participating testers expected to see dramatic increases before they gathered the data? Almost none, even the testers who had been told about the prior experiences of colleagues on similar projects. How many participating testers are glad that they took the time to use the scientific method?
- hypothesis
- experiment
- evidence
- revise world-view
Every one of them.
What stops more people from using the scientific method on their projects and gathering data to prove or disprove hypotheses like the one addressed in the study above? A pilot could take less than two days of one person's time. If past experience is any indication of future results (and granted, it isn't always), the odds look pretty good that the data would show productivity doubling, as measured in defects found per tester hour.
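As a rough illustration of that metric, a pilot comparison might be tallied like this; the defect counts and hours below are invented solely to show the calculation.

```python
def defects_per_tester_hour(defects_found: int, tester_hours: float) -> float:
    """The productivity metric from the study: defects found divided by tester hours."""
    return defects_found / tester_hours

# Hypothetical pilot numbers, made up purely to illustrate the comparison.
manual        = defects_per_tester_hour(defects_found=12, tester_hours=16)
combinatorial = defects_per_tester_hour(defects_found=29, tester_hours=16)

print(f"manual:        {manual:.2f} defects per tester hour")
print(f"combinatorial: {combinatorial:.2f} defects per tester hour")
print(f"improvement:   {combinatorial / manual:.1f}x")
```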
What's stopping the testing community from doing more such analysis? Perhaps more importantly, what is stopping you from gathering this kind of data on your project?
Additional empirical studies on the effectiveness of software testing strategies would greatly benefit the software testing community.
Related:
- Hexawise case studies on software testing improvement (health insurance, IT consulting and mortgage processing studies)
- How Not to Design Pairwise Software Tests
- 3 Strategies to Maximize Effectiveness of Your Tests