Thursday, November 17, 2011

When you Feel Rejected…

It is common to see a bug rejected as “Not a requirement”. It sometimes hurts, as it pushes aside your valuable feedback with a process-related excuse.
Common examples:
·         When a requirement includes implementation details and the devil (our bug) is in those details – the bug is actually in the requirements.
·         When an issue is detected by using an oracle other than the official requirement (for example one of the HICCUPPS heuristics).
Some less logical examples that I’ve actually seen:
·         When the fix involves someone who is not committed to the effort yet – for example when a Platform bug requires a Software workaround, especially if the effort is big. “Not a requirement” here actually means “Not my responsibility”.
·         When a bug is the result of a design limitation. “Not a requirement” here is actually “It’s not my fault, it’s the designer’s fault”, and many times “The bug fix is too expensive”.
Choose the playing field according to the context.
There’s a big field of product value that contains a smaller field: the requirements scope. I play in both. When we found this disputable bug, we kicked the ball into the big field; when someone moved its status to “Not a requirement”, they kicked the ball back into the smaller field.

Now it’s your turn to select your move according to the context:
1)      Accept the bug rejection
Sometimes the argument on the other side of the coin has validity.
2)      Kick the ball within the requirements field
While the “requirements – yes or no?” argument limits the discussion, if you are able to win it, it will be easier to lead the bug to a fix, as the bug handling process is usually more efficient and faster than the requirements definition and approval process. Beware of being too persuasive and winning the argument without a proper reviewer.
3)      Kick the ball to the big field of value again
When the rejection is correct process-wise but not product-value-wise, it’s time to play in the big field with the big boys. Advocate your bug to stakeholders and decision makers, learn more from customer support and architects, or submit a requirement change request. Playing in this field is a long-distance run, and scoring a goal is much rarer, but this is where you will meet the professional players and improve your own skills.
While the requirements discussion can be more or less relevant, playing beyond it might bring the best rewards.

Sunday, July 24, 2011

The Double Sin of the Early Perfect Test Case

Since I started leading my current testing team, I’ve been struggling with the test case base.

There are a few factors that made the test case base clumsy and outdated. One of the most stunning facts for me was that even test cases that demonstrated a big investment in details were often outdated. Despite the large investment, some details were wrong, had changed since, or had never been true. Often, you could find new testers struggling to understand and execute the test baseline.

The First Sin: Detailed Gen 0 Test Cases

In my experience, when test cases are created before the test designer sees and experiences the product, it is more than likely that they will not be accurate.

The reason for the failure is the limitation of our mind in perfectly imagining an abstract design. Sometimes even the designers don’t have a 100% complete design. While you can plan many things ahead of time, you can also anticipate that you will have gaps in your planning – just not anticipate their exact location.

The Second Sin: Detailed Gen 1 Test Cases

What about tests that successfully made it from Gen 0 to Gen 1 and proved to be correct? What about tests that were designed after the product was introduced and tried? They might not suffer from the first sin, but they will suffer from the second sin. Although these tests were accurate in their assumptions about the product itself, not all of them were the correct ones to run. Moreover, some of the tests that did a great job for Gen 1 have finished their duty. Using these tests in regression will not be efficient.

As we progress with the test execution, we learn more about the risks of the product. At the end of the first-generation testing we can plan better regression testing for the next generations. Typically we will add a small number of test cases and get rid of a larger number of tests.

Conclusion: investing in too many detailed test cases during Gen 0 and Gen 1 is not efficient.

I’ll try to define basic guidelines to deal with this issue:

1) Lower your expectations of Gen 0 and Gen 1 test cases – understand the built-in limitations of these test cases: Gen 0 might be inaccurate and Gen 1 will not fit your regression needs.

2) Seek alternatives when planning Gen 0 and Gen 1 test cases. For example, use checklists instead of steps (see “The Value of Checklists and the Danger of Scripts” by Cem Kaner).

3) Try to think of better uses of your time during test planning. For example, invest in automation infrastructure during preparation.

4) Realize that moving from Gen 1 to Gen 2 will require more time in test documentation, and is not just copy-paste from the Gen 0-1 test cases. At this stage, you can save time by creating less “perfect” test cases for the new features being introduced in the product at the same time.

5) Consider the possibility that you will come to like your lean Gen 0 and Gen 1 test cases so much that you won’t want to invest in more details for the regression test case base.
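Guideline 2 above (“use checklists instead of steps”) can be sketched in code. This is a minimal illustration with an invented login feature; the checklist items, field names, and the `coverage_report` helper are my assumptions for the example, not anything from an actual test base:

```python
# A scripted test fixes every step and value in advance, so any detail
# that changes in the product makes the whole script stale.
SCRIPTED_TEST = [
    "1. Open the login page at /login",
    "2. Type 'user1' into the Username field",
    "3. Type 'Passw0rd!' into the Password field",
    "4. Click the 'Sign in' button",
    "5. Verify the text 'Welcome, user1' appears in the top bar",
]

# A checklist records intents to cover; the tester supplies the details
# at execution time, so the list survives UI and design changes.
LOGIN_CHECKLIST = [
    "valid credentials are accepted",
    "invalid credentials are rejected with a clear message",
    "password field masks its input",
    "session survives a page refresh",
]

def coverage_report(checklist, covered):
    """Return the checklist items not yet covered in this session."""
    return [item for item in checklist if item not in covered]
```

For example, after covering only the first intent, `coverage_report(LOGIN_CHECKLIST, {"valid credentials are accepted"})` leaves three items to pursue, without dictating how.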

If you claim that your experience is different and that it is possible to create perfect, reusable test plans at early stages, I can think of the following possibilities:

1) You are a better product and test designer than the ones I work with (please mentor me).

2) You don’t have complex and innovative products like the products that I test.

3) You follow a perfect process that prevents you from falling into such traps (and I would like to hear more about it).

Sunday, May 8, 2011

Is there a Pesticide paradox in testing?

As a tester, I have heard and read the term “pesticide paradox” on many occasions. However, I do not feel comfortable with it, so I avoid using it. In the last few days, I decided to examine it more carefully – does this term make sense? I did some googling to explore the common use of the term in software testing and the definition of the paradox in the real pesticide world, and to try to give an answer to the question.

The original paradox is explained in Wikipedia:
“The Paradox of the pesticides is a paradox that states that by applying pesticide to a pest, one may in fact increase its abundance. This happens when the pesticide upsets natural predator-prey dynamics in the ecosystem.” I'll refer to this definition as the "original".

The common use of the term in testing, as I have experienced it, is to describe that when using scripted tests (automated in most cases), which are repeated over and over again, eventually the same set of test cases will no longer find any new defects (I took a quote from the ISTQB syllabus, which is a great source of "terms of common use"). I'll refer to this definition as the "common use".

I also found the explanation that "A static set of tests will become less effective as developers learn to avoid making mistakes that trigger the tests" (a paper by Rex Black).

I could say that the common use of the term is to describe that repeating the same checks tends to yield fewer bugs from run to run. I agree that this is usually true. In my experience, since bugs get fixed, and as long as no major changes are introduced and development doesn't go very badly, products become more stable from release to release. The developer-learning explanation is logical as well.

If we try to correlate the testing common use of the term with the original one, we will see a very loose connection – software bugs do not increase because you repeat some checks. Moreover, where is the paradox here? If you ask a question over and over, it is very likely that most of the time you'll get the correct answer. I don’t call this a paradox.

When you analyze a term, it is good practice to read the source. I don't have the book Software Testing Techniques by Boris Beizer, from which the term originates, but I found the quote:
First law: The pesticide paradox. Every method you use to prevent or find bugs leaves a residue of subtler bugs against which those methods are ineffective.
– Boris Beizer, Software Testing Techniques, Chapter 1, Section 1.7 (ISBN 0442206720). Beizer notes that farmers solve this problem by planting sacrifice crops for the bugs to eat, and laments that programmers are unable to write sacrifice functions. I'll refer to this quote as "Beizer's".

Well, that makes sense too, and it is a good foundational law to learn before you learn about methods – no method is fully effective. As with the common-use term, I don't see the paradox.
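Beizer's law can be illustrated with a toy model. The bug classes and the coverage of each method here are invented for illustration – the point is only the structure of the claim: whatever survives a set of methods is, by construction, exactly what those methods cannot see:

```python
# Toy illustration of Beizer's "pesticide paradox" law: each testing
# method detects only certain bug classes, so the bugs remaining after
# applying it are the "residue of subtler bugs" it is ineffective against.
# Bug classes and per-method coverage are invented for this sketch.

BUGS = {"off-by-one", "null-deref", "race-condition", "spec-misread"}

METHOD_CATCHES = {
    "unit tests":  {"off-by-one", "null-deref"},
    "code review": {"off-by-one", "spec-misread"},
}

def residue(bugs, methods):
    """Bugs left after applying every method in `methods`."""
    remaining = set(bugs)
    for caught in methods.values():
        remaining -= caught
    return remaining
```

Applying both methods still leaves `{"race-condition"}` – not more bugs than we started with, just the ones these particular methods can't touch. That is decreased effectiveness, not a paradox.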

I'll summarize my conclusions on the subject:

• The connection between the original term – the biological phenomenon of the "pesticide paradox" – and the common use in the testing world is mostly due to the use of the term “bug” to describe a defect, and to the fact that the original paradox deals with a kind of inefficiency when trying to apply pesticide to pests.

• A clear logical paradox appears in the original phenomenon – you kill bugs, but this increases their abundance – while Beizer's version and the common use of the term talk about reduced efficiency, not a paradox. A possible response to this statement would be to argue that when you do something and it is not efficient, this is a paradox; to my taste, this is too apologetic an argument.

• The original software testing usage, quoted from Beizer, is a warning against relying on a sole method, while the common use by others, who usually refer to Beizer as the source, is to describe the decreased efficiency of repeating a scripted test.

It’s fine to use a cool term with a loose analogy to describe your idea, but as you can see in our example, this might cause others to "steal" your term to describe other things (and worse – reference you as the source). In addition, it will be hard to convince people with critical thinking to use your term. I will leave the pesticide paradox to its original meaning.

Tuesday, January 4, 2011

Note about terminology

Sometimes, it’s all about branding. When you want to sell a product or an approach, using terminology that will "sell" your approach to your stakeholders or the professional community affects the chances that it will be accepted.
When Fred Hoyle coined the term Big Bang during a 1949 BBC radio broadcast, he did not anticipate that he was doing a branding service to the competing theory. According to Hoyle, who favored an alternative "steady state" cosmological model, he used the striking image to highlight the difference between the two models. He probably did it too well :-)

Markus Gärtner, in his post Active vs. passive testing, introduces refreshing terminology for what we used to call Testing vs. Checking or Exploratory vs. Scripted: he uses the terms Active vs. Passive testing. He also talks about the role of judgment, which is part of being active, but basically the new pair of Active vs. Passive comes to describe a researching, critical, exploratory approach versus executing the planned tests, checking, and following defined scripts.

This new terminology has some benefits over the terms we are used to. It doesn't reuse a term, like "Testing", that we already use to describe a wider area. And unlike the term "exploratory", it doesn't suffer from the "unstructured" public image.