I am subscribed to several mailing lists. One of them is Software Test Professionals (http://www.softwaretestpro.com). They organize training and lectures, and in November one was offered with the title "My Testing Metrics Allergy" (by Dawn Haynes, more information at http://www.softwaretestpro.com/Item/5332/?utm_source=Email&utm_medium=email&utm_content=111611-ONLINE-SUMMIT-&utm_campaign=TRAINING). The title and the topic struck a chord. I have been in too many meetings where BAs, PMs, SMEs and others with a tester hat on their heads uttered ridiculous sentences like "testing is 78.12% completed". Why is this sentence ridiculous? Well, for several reasons:
a) The two-digit precision makes no sense. It simply shows the organization has spent a lot of money and effort on a software program that keeps a tally of how many test scripts have been executed versus how many have not. Given the nature of software testing (see below), this precision is meaningless. In most cases, the most that can be said is "Testing is approximately 25/50/75% completed".
b) The statement implicitly assumes all test scripts are equal. They are not. Let's use the following example:
- Test script to validate the correct position of the corporate logo.
- Test script to measure the response time for a user when there are already 1000 users in the system.
The first is straightforward to set up and execute. The second... well, if nothing else it will require several measurements (to get some statistics), some data analysis, and correlation with other performance indicators of the system. In addition, the setup will be anything but straightforward. To give both test scripts the same weight when calculating the percentage of completion of testing is to have a skewed view of reality; the toy calculation below makes the skew concrete.
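Here is a minimal sketch (the scripts, effort estimates and numbers are all invented, not taken from any real project) comparing the naive count-based percentage with an effort-weighted one:

```python
# Toy comparison of a naive "executed / total" completion metric with one
# weighted by estimated effort. All data below is made up for illustration.

test_scripts = [
    # (name, estimated effort in hours, executed?)
    ("validate corporate logo position", 0.5, True),
    ("response time with 1000 concurrent users", 16.0, False),
    ("login with valid credentials", 1.0, True),
    ("export report to PDF", 1.0, True),
]

executed = sum(1 for _, _, done in test_scripts if done)
naive_pct = 100.0 * executed / len(test_scripts)

total_effort = sum(effort for _, effort, _ in test_scripts)
done_effort = sum(effort for _, effort, done in test_scripts if done)
weighted_pct = 100.0 * done_effort / total_effort

print(f"Naive completion:    {naive_pct:.2f}%")     # 75.00%
print(f"Weighted completion: {weighted_pct:.2f}%")  # 13.51%
```

Three of four scripts executed sounds like 75% done, but weighted by effort the same run is barely 14% done. Both numbers print with two decimals; neither deserves them.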
c) The percentage accounts neither for missing requirements nor for requirements that have some, but not all, of the tests necessary to cover them. In theory this should not happen, but in reality it does.
d) Test scripts usually cover more than one requirement. If the mapping between requirements and test cases is not done perfectly, the percentage of completion will be misleading.
e) When you test, you venture into unknown territory where you may discover surprises that nobody expected. Surprises that were not hinted at by any requirement. How are these surprises reflected in the percentage of completion? The answer is that they are not.
These are a few of the problems a very precise percentage of completion involves. In other words, the error bars around the number are much larger than 0.01%, and probably even larger than 10%.
So, how can you evaluate where you are in your testing effort? Well, it depends on the project and/or your experience with similar projects. In my case, I like to use the bug convergence and zero bug bounce milestones (check http://technet.microsoft.com/en-us/library/bb497042.aspx for a clarification of these concepts in the context of the stabilization phase of a software project). Based on my experience, these milestones provide, in the context of an integration project, the 30% and 60% completion marks. It is less impressive than being able to say 29.78% or 61.07%, but, in my opinion, it is much more honest.
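For concreteness, here is a rough sketch of how these two milestones might be spotted from daily found/fixed counts (the daily numbers are invented; the Microsoft article linked above gives the actual definitions):

```python
# Spotting the two stabilization milestones from daily bug counts:
#  - bug convergence: the day fixes start outpacing new finds, so the
#    active bug count begins to trend down
#  - zero bug bounce: the first day the active bug count touches zero
# All numbers are made up for illustration.

daily = [
    # (day, bugs found, bugs fixed)
    (1, 12, 3), (2, 15, 5), (3, 10, 9), (4, 8, 11),
    (5, 6, 12), (6, 4, 10), (7, 2, 9), (8, 1, 6), (9, 0, 2),
]

active = 0
convergence_day = None
for day, found, fixed in daily:
    active = max(0, active + found - fixed)
    if convergence_day is None and fixed > found:
        convergence_day = day
    if active == 0:
        print(f"Zero bug bounce on day {day}")
        break

print(f"Bug convergence on day {convergence_day}")
```

In a real project you would smooth the daily counts and require the downward trend to hold for several days before declaring convergence, but the idea is the same: track the direction of the curve, not the decimal places.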
14 January 2012
12 September 2011
At this moment I have traded my tester hat for a project manager's, or better said, a general contractor's hard hat. I am building what I hope will be my future house.
It's a lot of excitement, a lot of frustration, and a lot of opportunities to learn. And one of the things I have learnt is that professions are not so different from one another.
A few weeks ago the structural engineer -the guy who makes sure the architect's design will hold up in case of storms, high winds, earthquakes and so on- presented me with the structural drawings. He has a reputation for preferring to overdo things. However, he was very frank: the structure he designed will ensure that in case of an earthquake -the most significant hazard in my corner of the woods- the house will not collapse. However, after an earthquake, the house may not be habitable. For it to continue being useful after an earthquake (think of hospitals, fire stations and so on...) you need to spend more money and time.
This situation is identical to software testing: executing all possible testing paths and areas (functionality, performance, stress, security, usability...) would require an enormous effort that you may choose not to make. The important thing is that the decision needs to be made rationally and, even more importantly, documented. All the stakeholders should be aware of the limitations of the testing effort and the trade-offs that have been made.
And trade-offs are a part of life, and a part of software testing. Unless you are testing a flight control system, for example...
02 March 2011
A few months ago, I posted the following question to a test discussion group:
Is Agile Testing an oxymoron?
Does Agile make life more difficult for testers?
Agile was thought up by developers to solve specific developers' issues. However, for other trades like QA, Agile has not done much, and arguably has complicated our lives a bit more, if nothing else by creating false expectations: integration testing, performance testing, stress testing and so on cannot follow an Agile model, despite some project managers wishing they could.
I would like to start a discussion around this topic, if nothing else to gauge the opinion of fellow QA engineers. It may also provide some food for thought for project managers who think that vague user stories with changing details are enough to develop a good test script!
Unfortunately, only Jason M. Morgan replied to my post. His comments were:
My most recent employer was using Scrum. The QA activities related to the new features being developed in the scrum team fit easily into the sprints. But there were larger testing activities, such as regression testing, that were too big for one team or even one sprint. So I had to break down the regression testing into small pieces that could be assigned to each team within a sprint. It was a lot of overhead for no benefit other than following the scrum model.
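The breakdown Jason describes might look something like this rough sketch (the team names, suite size and per-sprint capacities are all invented):

```python
# A rough sketch (all names and numbers invented) of slicing a large
# regression suite into chunks small enough to fit each scrum team's
# spare capacity within a sprint.

regression_suite = [f"regression_case_{i:03d}" for i in range(1, 121)]

# Hypothetical spare capacity, in test cases per sprint, for each team
capacity = {"team_red": 25, "team_blue": 15, "team_green": 20}

remaining = list(regression_suite)
sprint = 1
while remaining:
    for team, cap in capacity.items():
        chunk, remaining = remaining[:cap], remaining[cap:]
        if chunk:
            print(f"sprint {sprint}: {team} runs {len(chunk)} cases "
                  f"({chunk[0]}..{chunk[-1]})")
    sprint += 1
```

Which is exactly the overhead he complains about: the slicing, tracking and re-merging of results exist only to make the work fit the sprint boundaries, not to make the testing better.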
So, clearly, this topic didn't awaken the imagination of testers. However, I still think that Agile, as currently formulated and applied, doesn't make life easier for testers, and I would argue the same for the other trades involved in the software development process.
I think that Agile contains a lot of good principles and it makes a lot of sense to use it in most projects, especially small and medium-sized ones. But I would like to see Agile taking into account the needs of Business Analysts, Technical Writers, Trainers, Infrastructure, QA and QC. For example, it is very difficult for a Technical Writer to put a story into words without really knowing how the other stories have been implemented. The same goes for a tester: it is possible to test the flow of a single story, but it is much harder to imagine how this story interacts with others if they exist only in concept (think about processing a payment as one story while the create-payment story is still under development). It is in this context that I say Agile has shifted work from development to the other areas. The question now is not whether to throw Agile away, but how to extend its benefits to everybody who contributes to the completion of a project.