October 15, 2009

Software Testing Means Thorough Analysis

I am a software developer, so it might look strange that the subject of my first post is testing. However, I feel that justice needs to be done. Not long ago I heard, yet again, someone loudly claiming that having developers do testing is a waste of money. The context: a discussion about agile software development, in which the same person was saying that every member of an agile team must develop multiple skills and be ready to do more than one job if needed. Apparently, in this case, "every member" meant only the testers.

I am a software developer who has worked in this industry for about ten years. After the first five I took some time off from programming and worked for almost two years as a tester. I really wanted to learn more about the trade, and the better I became at testing, the more I learned about programming and programmers. I will not go into the discussion of why you need professional testers; Joel Spolsky has a full post on the subject on his blog. I will not even share with you how exciting and fun it is to work as a tester; Harry Robinson does that better than I could.

The one thing I will do is argue that good testing is one of the most thorough analysis activities in a software project. It is enough to mention the two questions that lay the foundation of any decent software testing: How does the software work? and How should the software work? Those are already fundamental analysis questions. If you argue that the answers are already provided by the designated analysts, architects and programmers, I will use James Bach's words: "Programmers have no clue what the boundaries are".
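
To make the boundary point concrete, here is a minimal sketch in Python. Everything in it is hypothetical, invented for illustration (the function, its name, and its 0-100 rule): a tester answering "How should the software work?" probes the edges of the input range, not just the happy path.

```python
# Hedged sketch: a hypothetical validation function and the boundary
# probes a tester would throw at it. The interesting inputs live at
# the edges of the specified range, not in the middle.

def accept_percentage(value):
    """Accept an integer percentage in [0, 100]; reject anything else."""
    return isinstance(value, int) and 0 <= value <= 100

# A tester's boundary checks: one step below, the boundary itself,
# the other boundary, one step above, and an oddball type.
cases = {
    -1: False,    # just below the lower boundary
    0: True,      # the lower boundary itself
    100: True,    # the upper boundary itself
    101: False,   # just above the upper boundary
    True: True,   # surprise: bool is a subclass of int in Python
}

for value, expected in cases.items():
    assert accept_percentage(value) is expected, value
print("all boundary cases behave as specified")
```

Note the last case: in Python, True is an instance of int, which is exactly the kind of edge a programmer writing the happy-path test rarely thinks to ask about.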

Once a good tester has passed the initial questions, she will bring out the big guns: Why does the software behave like it does? and Why should the software behave like it does? Let me explain what could fall under those questions:
  • Is the observed behavior intentional or accidental? The test may pass, but when you take a closer look at the logs you notice that it passes merely by accident. Changing the input just a little, or repeating the test several times, will make the application go wild.

  • Does the software follow common sense? Sometimes the software's behavior follows the specifications, but not common sense. A radio button used as a label, a network protocol using 10 pairs of messages for the initial handshake, or one that redefines every message of a well-known specification are a few wild examples, slightly exaggerated, but nevertheless derived from reality.

  • Sometimes nonsensical behavior is explained away by programmers with this innocent phrase: "It has always worked this way". A good tester will challenge such thin explanations.

  • A variation of the previous case is the "guru specification". I once worked with a tester who was trying to understand a nonsensical part of the functional specification. He found out that the section had been written by The Architect. He went to the architect and started questioning the spec: why should this part work like this, why should that part work like that? It did not take long before the architect got annoyed and answered: "It is like this because I said so." The tester replied: "I see... But now, seriously, dude, why should it work like this?"
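
The first bullet above, about accidental passes, can be sketched in code. This is a minimal, hypothetical example (the average function and its bug are invented for illustration): the original test passes only because its input happens to dodge the bug, and varying the input a little, many times, surfaces the accident.

```python
import random

# Hedged sketch: a hypothetical function that "works" for the single
# input the original test used, but only by accident.
def average(numbers):
    # Bug: integer division truncates; hidden when the sum divides evenly.
    return sum(numbers) // len(numbers)

# The original happy-path test passes -- 12 happens to divide evenly by 3.
assert average([2, 4, 6]) == 4

# A tester varies the input slightly and repeats; the accident surfaces.
random.seed(0)
failures = 0
for _ in range(100):
    data = [random.randint(1, 10) for _ in range(3)]
    exact = sum(data) / len(data)
    if average(data) != exact:
        failures += 1
print(f"{failures} of 100 varied inputs expose the truncation bug")
```
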

When the why questions are finished, the difficult part begins: How does the software really work? and How should the software really work? Only a few projects reach this phase, and not many testers are able to perform it. Usability testing is one example from this category, but it only scratches the surface.

Can and should programmers do testing? All the analysis skills needed for testing are also an excellent asset for a developer. However, based on my personal experience so far, the testing environment seems to do more to encourage non-conformism and the ability to challenge established views. I will not claim that all programmers can become good testers. But all programmers would learn a lot more about the software, and about their colleagues, if they performed testing often.

By the way, if you agree with my reasoning, all of the above are further arguments for doing the testing early. You do not want what is perhaps your best analysis phase to happen late in the project, do you?

As for the programmers shouting "I should not and will not do any testing because it would be a waste of my precious programmer value", letting them write code might prove to be the real waste of money at the end of the day.
