18 November 2010

Lessons learned from Performance Testing a complex application

After two years of being too busy to write any blog posts, I now have a bit more time to share some of the lessons I have learned while doing Performance Testing for a complex application (the enterprise-level, several-million-lines-of-code kind of application).

- Complex applications usually mean lots of different components: network, hardware, software... When producing performance data, you need to gather data about how each component behaves. It is not useful to say "the application is slow"; you need to be able to say "the application has a bottleneck in the disk I/O".

- You will be able to gather pretty much all the information you need about how your system is behaving using the performance counters embedded in your OS (Windows, Linux...). There is a small sketch after this list.

- Use only the performance counters you need, as they consume resources. Once you see that the values a counter reports are "good", you can stop collecting it.

- When the application is far from optimized, run all the counters and make a list of the worst ones, grouping them by logical area (data repository, UI, business processes...).

- With the previous list, test the individual components described by those counters. For example, you may focus on the data repository layer, looking at disk I/O, hard/soft page faults, etc.

- Based on those results, concentrate on a single counter. Following the previous example, you may focus on the disk I/O counter if the other counters look OK.

- Focus first on the low-hanging fruit: most performance problems are related to network bottlenecks, lack of memory, disk I/O and CPUs pinned at 100%. Once these are removed, you can focus on secondary problems: too many threads, unoptimized queries, unmaintained databases, etc. After that, you do not have many options left. The most readily available is to throw more hardware at the problem. If performance is still unsatisfactory, then rearchitecting your application is usually the only option left.

- Unless there are egregious errors, refactoring your code helps very little, if at all.

- Use automation to create load. To measure the response time to users' actions, use a human with a stopwatch.
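Before expanding on that last point, here is the small sketch promised above for the counter-related items. It is a minimal illustration only, assuming the cross-platform psutil Python library (my choice here; on Windows you would typically watch the same figures through PerfMon), and both the grouping into logical areas and the five-second window are illustrative rather than prescriptions.

import time
import psutil

def sample_counters(interval_s=5):
    # Take two snapshots a few seconds apart and report deltas grouped by
    # "logical area", so the worst-behaving areas are easy to pick out.
    psutil.cpu_percent(interval=None)              # prime the CPU counter
    disk_before = psutil.disk_io_counters()
    net_before = psutil.net_io_counters()
    time.sleep(interval_s)                         # let the system run under load
    disk_after = psutil.disk_io_counters()
    net_after = psutil.net_io_counters()

    return {
        "cpu":     {"busy_percent": psutil.cpu_percent(interval=None)},
        "memory":  {"used_percent": psutil.virtual_memory().percent},
        "disk":    {"read_mb":  (disk_after.read_bytes  - disk_before.read_bytes)  / 2**20,
                    "write_mb": (disk_after.write_bytes - disk_before.write_bytes) / 2**20},
        "network": {"sent_mb":  (net_after.bytes_sent - net_before.bytes_sent) / 2**20,
                    "recv_mb":  (net_after.bytes_recv - net_before.bytes_recv) / 2**20},
    }

if __name__ == "__main__":
    for area, values in sample_counters().items():
        print(area, values)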

The stopwatch point may be surprising for some. My experience is that introducing timers in the code doesn't work. First, it is not always possible; second, in practice it is pretty much impossible to capture certain events: you can capture when an image starts to be drawn on the screen, but not when it is shown entirely. If you want to measure the user experience, it is better to do it by hand, assuming the role of your user.
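As for creating the load itself, the sketch below shows the kind of automation I mean. The endpoint, the number of simulated users and the requests library are assumptions made for illustration; while something like this keeps pressure on the system, the tester with the stopwatch walks through the key user scenarios by hand.

import threading
import requests

TARGET_URL = "http://app.example.com/search"   # hypothetical endpoint
USERS = 20                                     # simulated concurrent users
REQUESTS_PER_USER = 100

def simulated_user():
    # Keep the system busy; failed requests still generate load and are not
    # what we are measuring here.
    for _ in range(REQUESTS_PER_USER):
        try:
            requests.get(TARGET_URL, timeout=30)
        except requests.RequestException:
            pass

threads = [threading.Thread(target=simulated_user) for _ in range(USERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()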

04 November 2008

To automate or not to automate, that is the question...



Whether 'tis nobler in the mind to suffer
The slings and arrows of manual testing
Or to take arms against a sea of automated tests
And by developing end them. To die, to pain--

.....


Anybody who has tried automation knows that it needs to be done carefully. Managers, convinced by automated-tool salespeople and Star Trek voice-activated computers, will try to force you to automate everything. The arguments will be the usual ones: increased productivity, more reliable results and speed. However, the land of automated testing is littered with corpses and failed projects.

Why? Well, based on what I have seen, the foremost reason is that test automation efforts are projects that need to be thought out, managed, staffed and planned like any other software development project. To be useful they need to answer the following questions:


  • Will the test be executed only a limited number of times, or does it need to be run regularly?
  • The requirements for the application under test: will they change soon? Change over time? Are they still undefined?
  • Is the code under test architected in such a way that a change in one component will not affect our automated tests?
  • Can the code under test be automated with off-the-shelf software?
  • Has the code under test been instrumented? Does it have hooks that can be used?
  • Is the automated test code modular?
  • Is the automated test code easy to maintain?
  • Is there time allocated to document your automated tests?
  • Are you testing your automated tests?


These are basic considerations that need to be answered before proceeding with an automation project. However, even before these considerations it is fundamental to remember the first point I have made: any automation exercise is a project in itself. This means it needs the appropriate level of staffing, project management supervision, methodology... Failure to plan the automation project appropriately incurs the same risks as any other project: overworked staff, last-minute problems, cost/time overruns...
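To give a flavour of what "modular" and "hooks" mean in the checklist above, here is a small sketch. OrderPage and FakeDriver are hypothetical, invented purely for illustration: the test states what to check, while a single helper class knows how the screen is driven, so a change in the UI means updating one class rather than every test.

class FakeDriver:
    # Stand-in for a real UI driver (e.g. Selenium) so the sketch runs on its own.
    def __init__(self):
        self.fields = {}

    def fill(self, name, value):
        self.fields[name] = value

    def click(self, name):
        pass                # a real driver would press the button here

    def read(self, name):
        return "Order received: %s x %s" % (self.fields["quantity"], self.fields["item"])

class OrderPage:
    # The single place that knows how the order screen is automated.
    def __init__(self, driver):
        self.driver = driver

    def submit_order(self, item, quantity):
        self.driver.fill("item", item)
        self.driver.fill("quantity", str(quantity))
        self.driver.click("submit")
        return self.driver.read("confirmation")

def test_order_confirmation(driver):
    message = OrderPage(driver).submit_order("widget", 3)
    assert "Order received" in message

if __name__ == "__main__":
    test_order_confirmation(FakeDriver())
    print("test_order_confirmation passed")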

So, my sincere recommendation to anybody trying to automate the testing of their new piece of software: treat it seriously, as a subproject of your main project.

17 March 2008

Do not give [free] honest advice/assessments in job interviews

Ok, this is a rant, so feel free to skip it!


About a year ago I had a job interview for a contract position. The project was complex, ill-organized and had a high probability of failure. A brief description of the project:

Company A starts a job for a customer: they are going to adapt their existing product, based on US business processes, to Canadian business processes. After a year with no progress, company B purchases company A. Another year passes with nothing to show. It is now two years after the project began and the customer still has nothing. When company B realizes the mess, it decides to cut its losses and hires company C to finish the project. The customer sets up a team to speed up the project, coordinating internal resources with company C, who incidentally are located in a different time zone.

At this stage I interviewed for the QA position on the coordinating team. Among other responsibilities, it included testing the code that company C was producing. There were no procedures, the specs were vague, and the group of developers used the word "Agile" to describe what was really a cowboy approach. The PM, working for the customer, had neither carrot nor stick over company B or company C. My assessment of the situation was somewhat bleak, and I proposed measures like change control, regular risk meetings, and clearly defined [and achievable] goals for each iteration... I know I sounded a bit negative, but I definitely didn't want to sound like all was fine: it wasn't.

A year later, and because Vancouver is a small IT town, I have learned that they hired another person whose mantra was 'Don't worry, be happy'. Six months after the hiring, the PM realized it was not working. So they started to hold risk meetings, put change control in place, and gave each iteration a small number of features that had to be implemented. It has taken over three years for the customer to have something operational that can be put into production.

Conclusion: next time I want the job, I need to smile, assuage all fears with 'It's going to be fine', and send a bill for any advice I may give during the interview.