October 16, 2009

Refactoring analogies

One of the interesting consequences of software development being a relatively young profession is the lack of established mental models for the activities occurring in the industry. One of the effects is the use of analogies to other human activities as a way of explaining software concepts.

Of course, analogies are a tool used extensively outside the software profession. The difference I see, though, is that software professionals use them to explain things within the community, rather than to popularize their subject outside it. Doctors rarely use analogies to explain medical matters to their colleagues. Physicists use analogies extensively to explain abstract theories like relativity or quantum mechanics to outsiders, but not when talking to each other (Schrödinger's cat excluded).

Refactoring or re-writing software components is one of the most difficult activities to explain within the software development community. There is a quasi-consensus that it is good, although only a few people can explain why.

Although refactoring and re-writing are an obvious case of re-work, many people go to great lengths to explain the opposite, because lean teaches us that re-work is bad. This is where I think the weakness of analogies starts to show: they break if you try to stretch them too far. Lean concepts come from product creation activities in the auto industry. While the product creation processes for cars and for software have a lot in common, refactoring might be one of the cases where the analogy breaks.

Craig Larman has another analogy for refactoring: it is like pruning your garden. In both cases you cut away dead material to allow fresh branches to grow. Do you think this statement forces the analogy a bit? "Many new gardeners can't bear the idea of cutting back an entire plant, but this is tough love and your plants will thank you."

I have found that an analogy to crop rotation is a good way to explain the return on investment (ROI) of refactoring and re-writing. For example, product owners in Scrum can drive high-ROI features from a software component for one iteration/release, but then they will have to accept a lower ROI for the next period, because the component needs to be refactored. This is similar to planting clover after cereals or letting the plot rest. Usually you divide your software into multiple components, the same way farmers divide their farm into multiple plots. If you refactor often enough, you will not need to do it on all the components at the same time. You can drive your high-ROI features from some of the components while not adding any features that touch the components requiring refactoring.
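
To make the rotation idea concrete, here is a toy sketch in Python. Everything in it is invented for illustration — the component names, the ROI curve, the assumption that refactoring resets accumulated cruft to zero — so treat it as a thought experiment, not a model of any real project.

    # Toy model of "crop rotation" for software components.
    # All names and numbers below are invented for illustration only.

    COMPONENTS = ["billing", "reports", "search", "admin"]
    ITERATIONS = 8

    # Cruft accumulated per component; more cruft means lower feature ROI.
    cruft = {name: 0 for name in COMPONENTS}

    def feature_roi(cruft_level):
        """Hypothetical ROI curve: each unit of cruft halves the payoff."""
        return 100 / (2 ** cruft_level)

    total_roi = 0
    for iteration in range(ITERATIONS):
        # Rotate: one component "rests" (gets refactored) each iteration,
        # like a plot planted with clover; the others produce features.
        resting = COMPONENTS[iteration % len(COMPONENTS)]
        cruft[resting] = 0  # refactoring restores the component

        for name in COMPONENTS:
            if name == resting:
                continue
            total_roi += feature_roi(cruft[name])
            cruft[name] += 1  # adding features accumulates cruft

        print(f"iteration {iteration}: rested {resting}, ROI so far {total_roi:.0f}")

If you never let a component rest, every component's ROI in this model decays toward zero — the desert scenario described below.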

The longer you go without refactoring or re-writing, the more painful it will be when you finally do it. If you are lucky and smart, you may find some low-hanging fruit to pick during a massive refactoring. If not, you will effectively give a release away to your customers for free. And if you persist in not refactoring at all, you will end up with dead land that turns into desert.

October 15, 2009

Software Testing Means Thorough Analysis

I am a software developer, so it might look strange that the subject of my first post is related to testing. However, I feel that justice needs to be done. Not long ago I heard, yet again, someone loudly claiming that having developers do testing is a waste of money. The context: a discussion about agile software development, in which the same person was saying that every member of an agile team must develop multiple skills and be ready to do more than one job if needed. Apparently, in this case, "every member" meant only the testers.

I am a software developer who has worked in this industry for about 10 years. After the first five years I took some time off from programming and worked for almost two years as a tester. I really wanted to learn more about the trade, and the better I became at testing, the more I learned about programming and programmers. I will not go into the discussion of why you need professional testers; Joel Spolsky has a full post on the subject on his blog. I will not even share with you how exciting and fun it is to work as a tester; Harry Robinson does it better than I could.

The one thing I will do is argue that good testing is one of the most thorough analysis activities in a software project. It is enough to mention the two questions that lay the foundation of any decent software testing: How does the software work? and How should the software work? Those are already fundamental analysis questions. If you argue that the answers are already provided by the designated analysts, architects and programmers, I will answer with James Bach's words: "Programmers have no clue what the boundaries are".

Once a good tester has moved past the initial questions, she will bring out the big guns: Why does the software behave the way it does? and Why should the software behave the way it does? Let me explain what could fall under those questions:
  • Is the observed behavior intentional or accidental? The test may pass, but when you take a closer look at the logs you notice that it passed merely by accident. Changing the input just a little, or repeating the test several times, will make the application go wild (a small sketch of this check follows the list).

  • Does the software follow common sense? Sometimes the software's behavior follows the specifications, but not common sense. A radio button used as a label, a network protocol using 10 pairs of messages for the initial handshake, or one that redefines every message of a well-known specification are a few wild examples, slightly exaggerated but nevertheless derived from reality.

  • Sometimes nonsensical behavior is explained by programmers with this innocent phrase: "It has always worked this way". A good tester will challenge such thin explanations.

  • A variation of the previous case is the "guru specification". I once worked with a tester who was trying to understand a nonsensical part of the functional specification. He found out that the section had been written by The Architect. He went to the architect and started questioning the spec: why should this part work like this, why should that part work like that. It did not take long before the architect got annoyed and answered: "It is like this because I said so." The tester replied: "I see... But now, seriously, dude, why should it work like this?"
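
Here is a minimal sketch of the "intentional or accidental" check from the first bullet, in Python. Everything named here is hypothetical — flaky_parse stands in for a call into the application under test, and the 5% failure rate only mimics nondeterministic behavior such as a race or uninitialized state.

    import random

    def check_accidental_pass(test_once, inputs, repetitions=50):
        """Re-run a passing test many times to see whether the pass
        reflects intentional behavior or is merely a lucky accident."""
        suspects = []
        for value in inputs:
            for _ in range(repetitions):
                if not test_once(value):
                    suspects.append(value)
                    break
        return suspects

    def flaky_parse(value):
        # Hypothetical stand-in for the real application call; it fails
        # about 5% of the time to mimic accidental behavior.
        return value if random.random() > 0.05 else None

    def test_once(value):
        # Placeholder for a real check against the application.
        return flaky_parse(value) == value

    if __name__ == "__main__":
        print("inputs whose pass looks accidental:",
              check_accidental_pass(test_once, inputs=[1, 2, 3]))

A single green run tells you little; only repetition and small perturbations separate a deliberate pass from a lucky one.
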
When the why questions are finished, the difficult part begins: How does the software really work? and How should the software really work? Only a few projects reach this phase, and not many testers are able to perform it. Usability testing is one example from this category, but it only scratches the surface.

Can and should programmers do testing? All the analysis skills needed for testing are also an excellent asset for a developer. However, based on my personal experience so far, the testing environment seems to do more to encourage non-conformism and the ability to challenge established views. I will not claim that all programmers can become good testers. But all programmers would learn a lot more about the software and about their colleagues if they tested often.

By the way, if you agree with my reasoning, all of the above are further arguments for testing early. You do not want what may be your best analysis phase to happen late in the project, do you?

As for the programmers shouting "I should not and will not do any testing because it would be a waste of my precious programmer value", letting them write code might prove to be the real waste of money at the end of the day.