by Greg Gauthier

I briefly outline three common industry misconceptions that James Bach et al. hint at in the book "Lessons Learned in Software Testing"; I further argue that they arise out of an archaic understanding of the role of testing, and that they often negatively affect the way testing, as a business function, is evaluated.

1. The myth of ignorance

The first, and perhaps the most pervasive, of the old testing myths is the notion that testers are – and must be, by definition – ignorant of the software they are testing. This assertion is often associated with a traditional "black box" testing principle, in which ignorance of the application's source code somehow prevents the myopia suffered by the developer. But knowledge of the product extends far beyond the underlying code, and that broader view can have a significant impact on testing. In lesson 22, the authors point out:

...The more you learn about a product, and the more ways in which you know it, the better you will be able to test it. But if your primary focus is on the source code and tests you can derive from the source code, you will be covering ground the programmer has probably covered already...

The point is that the tester is actually limiting themselves by focusing on the codebase, rather than engaging the full breadth of their knowledge. Distinctions in the domains of knowledge do not necessitate a hierarchy of value. It is certainly possible that on some projects, testing may not be as necessary as other roles, but this is a circumstantial pressure on the value of the role, not a structural one. The tester demonstrates their value to the team precisely by bringing to the table a different skill set and a different domain of knowledge than the developer's.

2. The myth of certainty

The myth of certainty, loosely stated, is the belief that testing will grant your project the blessing of certitude against failure. It is this false belief that drives impulses like quality "gatekeeping" and release "certification" exercises. As the authors of Lessons Learned clearly warn in lesson 30:

Beware of tests that purport to validate or certify a product in a way that goes beyond the specific tests you ran. No amount of testing provides certainty about the quality of the product.

This is due to the kinds of questions our tests are answering. As with any good scientific experiment, the best a test can offer is that the hypothesis was not falsified. Cumulatively, then, we can only say that the product could not be demonstrated to be defective relative to the ways we tried to define and discover those defects. Bach et al. state it very succinctly in lesson 35:

In the end, all you have is an impression of the product. Whatever you know about the quality of the product, it’s conjecture. No matter how well supported, you can’t be [absolutely] sure you’re right.

One thing the authors do not address directly is how many project teams are extremely uncomfortable with having this knowledge made conscious. Uncertainty is one of the most unwelcome states of mind in most areas of our lives, and software projects are no exception. The software tester should not fall into the trap of thinking that they can somehow provide this certainty. Neither should managers or other team members fall into the trap of thinking that this devalues the role of testing; to believe that it does is a mistake.

Instead of letting anxiety drive brittle pursuits of concrete certainty about the product, teams should strive for an agreed-upon degree of confidence that promises made to customers are being kept, taking conscious account of the potential risks. This more honest assessment of the state of the product will facilitate better decision-making and minimize the number of unpleasant surprises that arise after release.
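
To make the point concrete, here is a minimal, hypothetical sketch (in Python, and not from the book): a shipping-cost function with a boundary defect, and two checks that pass anyway because they never exercise the faulty input. The function name and the pricing rule are my own invention, purely for illustration.

def shipping_cost(weight_kg):
    # Intended rule: parcels of 10 kg or more ship at a flat rate of 20.0.
    # The defect: the comparison uses > instead of >=, so a 10 kg parcel
    # is wrongly charged the per-kilogram rate.
    if weight_kg > 10:
        return 20.0
    return 2.5 * weight_kg

def test_light_parcel():
    assert shipping_cost(2) == 5.0    # passes

def test_heavy_parcel():
    assert shipping_cost(12) == 20.0  # passes

Both checks pass, yet shipping_cost(10) returns 25.0 rather than the flat 20.0 the rule intended. The green result only tells us that those two hypotheses were not falsified; it says nothing about the inputs we never tried. That is the sense in which all testing yields an impression, not a certification.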

3. The myth of precision

This myth is one primarily harbored in the minds of my fellow testers. Bach et al. describe it perfectly, almost as an afterthought, in lesson 32:

If you expect to receive requirements on a sheaf of parchment, stamped with the seal of universal truth, find another line of work... A tester who treats project documentation (explicit specifications of the product) as the sole source of requirements is crippling his test process.

This is one way in which the myth of certainty shows up in the tester's own mindset. A tester who expects their team to be more certain about the desired state of the product than they are about the actual state of the product is deceiving themselves, and treating their team-mates unfairly.

The origin of this myth, it seems to me, harks back to the days of factory testing, where teams of testers are given fixed lists of requirements and test cases by manufacturing engineers and designers, and are expected, much like the factory's assemblers and packers, to simply execute their piece-work.

By contrast, the authors of "Lessons Learned" describe a highly collaborative process of "Conference, Inference, and Reference" when gathering requirements for software testing. My own experience in the field is very much consonant with this description. Particularly in Agile environments, a good tester must be extremely persistent and flexible when attempting to discover all the implicit and explicit expectations for any given feature. What's more, in an environment where requirements are dynamically defined as part of an ongoing set of interactions between team members, the best negotiators will set the standard for how the product's requirements are set, propagated, and refined. Testers, clearly, have a significant role to play in that effort.

Greg is an experienced test engineer with a deep background in a variety of technical environments. Originally from Chicago, he has worked in both enterprises and startups in the US, Europe, and the UK.