14 January 2012

I am subscribed to several mailing lists. One of them is Software Test Professionals (http://www.softwaretestpro.com). They organize training and lectures, and in November they offered one with the title "My Testing Metrics Allergy" (by Dawn Haynes, more information at http://www.softwaretestpro.com/Item/5332/?utm_source=Email&utm_medium=email&utm_content=111611-ONLINE-SUMMIT-&utm_campaign=TRAINING). The title and the topic struck a chord. I have been in too many meetings where BAs, PMs, SMEs and others with a tester hat on their heads uttered ridiculous sentences like "testing is 78.12% completed". Why is this sentence ridiculous? Well, for several reasons:

a) The two-digit precision makes no sense. It simply shows the organization has spent a lot of money and effort on a software program that keeps a tally of how many test scripts have been executed versus how many have not. Given the nature of software testing (see below), this precision is meaningless. In most cases, the most that can be said is "testing is approximately 25/50/75% completed".
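To make the point concrete, here is a minimal sketch of what such a tally tool computes, and of the more honest alternative. The counts are invented for illustration:

# Hypothetical tally: 1953 of 2500 test scripts executed.
executed, total = 1953, 2500
naive = 100.0 * executed / total
print(f"{naive:.2f}% completed")   # prints 78.12 -- spuriously precise

# A more honest report rounds to the nearest quarter.
honest = 25 * round(naive / 25)
print(f"approximately {honest}% completed")   # prints approximately 75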

b) The statement implicitly assumes all test scripts are equal. They are not. Consider the following example:
- A test script to validate the correct position of the corporate logo.
- A test script to measure the response time for a user when there are already 1000 users in the system.

The first is straightforward to set up and execute. The second... well, if nothing else, it will require several measurements (to get some statistics), some data analysis and correlation with other performance indicators of the system. In addition, the setup will be anything but straightforward. Giving both test scripts the same weight when calculating the percentage of completion of testing produces a skewed view of reality, as the sketch below illustrates.
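Here is a minimal sketch of that skew, using the two scripts above with an invented effort weight of 20:1 in favour of the performance test. Executing only the easy script looks like 50% done by script count, but barely 5% done by effort:

# Hypothetical scripts: (name, relative effort, executed?)
scripts = [
    ("logo position check",        1,  True),
    ("response time @ 1000 users", 20, False),
]

unweighted = 100.0 * sum(done for _, _, done in scripts) / len(scripts)
weighted = (100.0 * sum(w for _, w, done in scripts if done)
                  / sum(w for _, w, _ in scripts))

print(f"by script count: {unweighted:.0f}% complete")   # 50% complete
print(f"by effort:       {weighted:.0f}% complete")     # 5% complete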

c) The percentage accounts neither for missing requirements nor for requirements that have some, but not all, of the tests necessary to cover them. In theory this should not happen, but in reality it does.

d) Test scripts usually cover more than one requirement. If the mapping between requirements and test cases is not done perfectly, the percentage of completion will be misleading.
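Counting executed scripts and counting covered requirements can give quite different answers. A small sketch, with an invented two-script, three-requirement mapping:

# Hypothetical mapping: which requirements each test script covers.
covers = {
    "TS-01": {"REQ-1", "REQ-2"},   # executed
    "TS-02": {"REQ-2", "REQ-3"},   # not executed yet
}
executed = {"TS-01"}

all_reqs = set().union(*covers.values())
touched = set().union(*(covers[t] for t in executed))
fully = {r for r in all_reqs
         if all(t in executed for t, reqs in covers.items() if r in reqs)}

print(f"scripts executed:          {len(executed)}/{len(covers)}")    # 1/2
print(f"requirements touched:      {len(touched)}/{len(all_reqs)}")   # 2/3
print(f"requirements fully tested: {len(fully)}/{len(all_reqs)}")     # 1/3

Depending on which of the three numbers the reporting tool picks, the same state of testing reads as 50%, 67% or 33% complete.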

e) When you test, you venture into unknown territory where you may discover surprises that nobody expected, surprises that were not hinted at by any requirement. How are these surprises reflected in the percentage of completion? The answer is that they are not.


These are a few of the problems with a very precise percentage of completion. In other words, the error bars around the number are much larger than 0.01%, or even larger than 10%.

So, how can you evaluate where you are in your testing effort? Well, it depends on the project and/or your experience with similar projects. In my case, I like to use the bug convergence and zero bug bounce milestones (see http://technet.microsoft.com/en-us/library/bb497042.aspx for a clarification of these concepts in the context of the stabilization phase of a software project). In my experience, these milestones mark, in the context of an integration project, the 30% and 60% completion points. It is less impressive than being able to say 29.78% or 61.07%, but, in my opinion, it is much more honest.
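For what it is worth, here is a rough sketch of how these two milestones could be spotted in a bug tracker's daily counts. The day-by-day numbers are invented; real ones would come from your tracker. Bug convergence is taken here as the first day the active bug count starts to fall (fixes outpacing new finds), and zero bug bounce as the first day the active count touches zero (it usually bounces back up afterwards, hence the name):

from itertools import accumulate

opened   = [12, 9, 10, 6, 4, 3, 1, 0, 2, 0]   # new bugs found per day
resolved = [ 3, 5,  8, 9, 9, 8, 3, 0, 1, 1]   # bugs fixed/closed per day

# Running total of active (open, unresolved) bugs.
active = list(accumulate(o - r for o, r in zip(opened, resolved)))

# Bug convergence: first day the active count decreases.
convergence = next(d for d in range(1, len(active)) if active[d] < active[d - 1])

# Zero bug bounce: first day the active count reaches zero.
bounce = next(d for d, a in enumerate(active) if a == 0)

print(f"bug convergence on day {convergence}, zero bug bounce on day {bounce}")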