June 18, 2013

Centralized and distributed command


"Why in hell can't the Army do it if the Marines can. 
 They are the same kind of men; why can't they be like Marines?"
Gen. John J. Pershing

The special forces of different military organizations, and the US Marine Corps in particular, have been the darlings of the Agile software development movement. During informal discussions at agile conferences and seminars, the Marines are invariably mentioned as an example for software development teams, and for good reason. The Marines are trained to operate independently in uncertain, complex situations, so they are the ideal model of self-organizing teams. They are the first to be deployed in conflicts or humanitarian crises and are the ones responsible for the high-profile, highly dangerous operations run in hostile territories.

With so many good arguments for being like the Marines, why aren't the armies more like the Marines? This post is a naive (I have no military training) attempt to answer the question.

First, the army is nowadays more like the Marines. The professional army units of today are trained to operate in uncertain, complex situations. Comparing a professional military force, like the Marines, with conscripts who go through limited training is not fair.

Second, the job of the other branches of the armed forces is of a different nature than the Marines'. The Marines are a lightweight force, designed to act as a spearhead in case of conflict. Apart from fighting the enemy, the task of the army, navy or air force is to move large numbers of people and large amounts of resources from point A to point B, over land, sea or air, and to prevent the enemy from doing the same. The people and equipment have to arrive at the same place and at the same time, otherwise they will be an easy target for the enemy.

In the Battle of Normandy the Allies deployed more than 150 000 troops in two days. In the Gulf War the US alone sent more than 500 000 soldiers, and the campaign to liberate Kuwait included more than 150 000 soldiers and 1500 tanks. The synchronized deployment of large groups of people looks like a natural candidate for centralized command structures. The solution needs to be coherent across the whole group, and that is easier to ensure with a centralized plan. Central planning has made the architecture of Paris more coherent than that of organically grown Rome. Working for a company that resulted from several mergers and acquisitions, I could see first hand how much effort is required to integrate different IT solutions and cultures into one business. Local coherence does not translate into global coherence. While centralized command may give you global coherence, it does not guarantee success. Gallipoli, the Charge of the Light Brigade and the Battle of the Somme are atrocious examples of catastrophic failures of centralized command structures. A centralized command structure is only as good as the people in the command positions.

Is there an alternative to centralized command when organizing large groups of people? The March on Washington used a combination of centralized and distributed command structures to bring more than 200 000 people to Washington DC in 1963. The 1989 Eastern European revolutions and the recent Arab Spring are examples of loosely coupled structures that organized massive protests. Ideas seem to be a powerful catalyst for human cohesion. In all these examples, however, the distributed structures disintegrated as soon as the immediate goal had been achieved. The time needed to achieve a common, coherent solution also seems to be longer for distributed organizations. Organizing an event like the March on Washington would be a walk in the park for a military organization, although the result might not be as inspirational as the original.

The tension between centralized and distributed control has been constantly present in the history of government. Although human societies have experienced both centralized and decentralized forms of government for millennia, the discussion about which form is better, or what a good balance is, continues to this day. Some activities are better handled centrally (e.g. defining air corridors), while others are better handled at the local level (e.g. what type of flowers to plant in a park). To complicate matters, in domains like the health system the right balance is still under heavy debate.

Organizing activities that involve large groups of people is a problem that needs to be solved in many software development organizations. The debate over centralized vs. distributed management has been split many times along ideological lines: "centralized is good, distributed is bad" or vice versa. The examples from other human endeavours show that both approaches have their applications. It requires experimentation and rather frequent change to find the right place and balance for one or the other, and to find the right people who can drive such a dynamic structure. However, experiments with command structures have a powerful obstacle to overcome: once people get into a position of power, they tend to consolidate that position and try to extend their power, rather than explore paths that could diminish it, even temporarily.

March 19, 2013

Users, stories and user stories


If I had asked people what they wanted, they would have said faster horses.
Henry Ford


The quote that starts this post, attributed to Henry Ford, underlines a fundamental problem in product development: if what they need or want does not exist yet, or they have not used it before, customers have no means to express their needs.

Yet, the first time I had to figure out what kind of product the users of my software wanted, the most frequent advice I got from the senior engineers was: "Go to your customers and ask them: 'What do you want?' Then record the answers in the requirements document." Armed with this advice, I started talking with the people who would use my software. The discussions were an interesting and frustrating exercise. There was usually a core element of the software we were developing that was pretty obvious and whose functionality was easy to agree on. To other questions, like "Should this function be synchronous or asynchronous?", the users would just say "We don't know yet", even after spending some time thinking about it. Or sometimes they would prioritize the requirements in a way that would have clearly hurt their own work.

The idea that people cannot express their needs and desires is not specific to product development. In fact, the idea found more fertile ground in psychology, where Sigmund Freud developed the new field of psychoanalysis. People like Edward Bernays, one of Freud's nephews, started to use the idea in advertising and propaganda. Looking back through the history of ideas, we can see that anything that challenges the common sense of the day takes a long time to become mainstream. The heliocentric theory and the scurvy treatment took centuries to establish themselves. Customer cluelessness may not be such a radical idea, but it still took time for the software development community to wrap their minds around it. Anecdotal evidence from the enterprise software development community and public sector projects suggests that the customer is still the sacred source of requirements in those areas. On the other hand, software usability experts widely accept that the focus of their work should be on observing the users' actual behaviour, rather than on what they say about the product.

Maybe software development is simply a young industry. Let's see, then, how the discovery of customer cluelessness recently changed a thousands-of-years-old human endeavour: the food industry. For that I will rely heavily on Malcolm Gladwell's stories. The first story is about the journey of Howard Moskowitz, the man who brought unexpected pleasures to spaghetti lovers. In the 1970s, "assumption number one in the food industry", says Gladwell, "used to be that the way to find out what people want is to ask them." Moskowitz had other ideas. "The mind knows not what the tongue wants", Moskowitz told Gladwell, and that was the approach he took when Campbell's hired him to save their spaghetti sauce business. Instead of gathering people and asking them how they liked their spaghetti sauce, Moskowitz created recipe variants based on every parameter of the sauce that he could think of: sweetness, thickness, spiciness, cost, etc. He settled on 45 variants and took those on the road for people to taste. After processing the data he made a discovery that startled the people at Campbell's: a third of the people liked a whole new category of spaghetti sauce, extra chunky, that no manufacturer was producing. Campbell's went on to make a lot of money with their Prego extra chunky spaghetti sauce, and Moskowitz went on to create new categories of spaghetti sauce when he was eventually hired by Campbell's main competitor, Ragú.
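
Moskowitz's move is easy to restate in data terms: sample the parameter space, collect ratings, then cluster the raters instead of averaging them. The sketch below is a minimal illustration of that idea, not his actual method; the ratings, the segment sizes and the hand-rolled k-means are all invented for the example.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical taste-test data: 300 tasters rate 3 sauce styles
    # (plain, spicy, extra chunky) on a 0-100 scale. A third of the
    # tasters strongly prefer the chunky style, as in the Prego story.
    plain_fans  = rng.normal([80, 55, 40], 8, size=(100, 3))
    spicy_fans  = rng.normal([50, 85, 45], 8, size=(100, 3))
    chunky_fans = rng.normal([45, 50, 90], 8, size=(100, 3))
    ratings = np.vstack([plain_fans, spicy_fans, chunky_fans])

    def kmeans(X, k, steps=50):
        """Tiny k-means: cluster tasters by their rating vectors."""
        centers = X[rng.choice(len(X), k, replace=False)]
        for _ in range(steps):
            labels = ((X[:, None] - centers) ** 2).sum(axis=2).argmin(axis=1)
            centers = np.array([X[labels == i].mean(axis=0)
                                if (labels == i).any() else centers[i]
                                for i in range(k)])
        return labels, centers

    labels, centers = kmeans(ratings, k=3)
    for i, c in enumerate(centers):
        print(f"segment {i}: {(labels == i).mean():.0%} of tasters, "
              f"mean ratings {c.round(1)}")

Averaging all 300 tasters would have hidden the chunky segment entirely; clustering the raters is what makes it visible.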

To end this section about traditional requirements gathering in IT, I turn to Dave Snowden's list of possible problems that arise when system analysts interview the users:
  1. In general users don't know what they want until they get it, then they want something different
  2. This is in part because the interview process can only really explore what they don't like about the current state of affairs, a sort of need defined by negation of the present
  3. Systems analysts, like any interviewers, start to form subconscious hypotheses after a fairly small number of interviews and then only pay attention to things that match those hypotheses in subsequent interviews
  4. Outliers, or odd demands, are often ignored, while these may present some of the best opportunities for radical innovation and improvement
  5. Most radical new uses of technology are discovered through use, not through request, and more often than not accidentally (think Facebook, Twitter, etc.)
  6. People only know what they know when they need to know it; it requires a contextual trigger which cannot be achieved in an interview
  7. Early-stage problems in roll-out are easily ignored, or more frequently not reported, as they seem minor, but then they build up and result in major setbacks.

If there are fundamental limits on users' knowledge about new products, how have product developers worked around the issue?

I have already implied one method above: create several variants of your product and ask the users to test them. That is what usability studies do and that is what Howard Moskowitz did. In his book Blink, Malcolm Gladwell tells a few stories that should make a product developer careful when testing product variants:
  • The way you structure the test influences the results. The Pepsi Challenge, a test in which soft drink tasters consistently chose Pepsi over Coca-Cola, was a "sip test" and Gladwell shows that people generally prefer the sweeter drink in a sip test. However, the results are different when they drink a whole can or when they drink a larger quantity over a longer period of time.
  • Faced with too many choices, customers are not able to make a decision. Offering 6 varieties of jam in a tasting corner led to more jam sales than offering 24 varieties. In his spaghetti sauce tasting experiments, Howard Moskowitz asked people to eat between eight and ten small bowls of different spaghetti sauces, rather than taste all 45 varieties that he had created.
  • Asking non-experts to explain why they prefer a certain product changes their preferences. When taste experts and non-experts were asked to rank strawberry jams, they produced very similar results if the non-experts did not have to explain their choices. However, the non-expert ranking was completely messed up when they had to explain their choices. Remember the usability study principle: observe what the users do, not what they tell you about the product!
There are at least a couple of other limitations for this method:
  • You need at least the minimum knowledge about the product required to produce rough variants or mock-ups.
  • The usability testing approach requires observation of user behavior, which might sometimes not be possible due to physical constraints or privacy concerns.
How about the cases when the product developers do not have the right information to build product variants? Two groups, starting from different directions, came up with solutions that share some similarities.

The first group, led by Clayton Christensen, started from business and marketing theory. In The Innovator's Solution, Christensen argues that, while there is correlation, there is no proven causality between customer demographics and product sales. "The fact that you're 18 to 35 years old with a college degree does not cause you to buy a product". People "hire" products that help them do a "job". Christensen's classic example is the story of a fast food restaurant trying to improve its milk shake sales.
    Its marketers first defined the market segment by product—milk shakes—and then segmented it further by profiling the demographic and personality characteristics of those customers who frequently bought milk shakes. Next, they invited people who fit this profile to evaluate whether making the shakes thicker, more chocolaty, cheaper, or chunkier would satisfy them better. The panelists gave clear feedback, but the consequent improvements to the product had no impact on sales.
The researcher from Christensen's group (the JTBD group) used a different approach. Instead of focusing on the product parameters, he spent his time trying to establish the context in which the milk shakes were bought: the time of day, what else the customers bought, etc. At this step the researcher already observed an interesting pattern:
    He was surprised to find that 40 percent of all milk shakes were purchased in the early morning. Most often, these early-morning customers were alone; they did not buy anything else; and they consumed their shakes in their cars.
That pattern did not yet tell him what job the milk shakes were hired to do, so he interviewed the early morning customers. The focus was again on the context in which the product was used rather than the product itself, and his efforts were fully repaid when he figured out the pattern:
    Most bought it to do a similar job: They faced a long, boring commute and needed something to make the drive more interesting. They weren't yet hungry but knew that they would be by 10 a.m. [...] They were in a hurry, they were wearing work clothes, and they had (at most) one free hand.
The key activities in this method are the data gathering that establishes the context and the customer interview process. In the interviews the subjects describe in detail (as much as one can gather from a busy commuter early in the morning) the context of their decision to buy the product and their usage of the product. They are not asked for opinions about the product parameters, thus avoiding the problem mentioned earlier, of customers changing their choices when they have to explain them. A demonstration of the interview technique can be found at jobstobedone.org.

In a discussion with Horace Dediu, of Asymco, Bob Moesta, one of the people who worked with Clayton Christensen in the '90s, summarizes the consumer interview process:
    The key is both getting the ethnography, of understanding what they [the customers] are doing and then getting them to tell stories and boil the essence of the story down and build a theory of why and how they consume.
The second group (the CE group), centered around Dave Snowden of Cognitive Edge, has its roots in the study of organizations, "drawing on anthropology, neuroscience and complex adaptive systems theory". Their techniques for requirements discovery are also based on data collection through user narratives, followed by processing for pattern detection, with context awareness playing a very important role in the process. There are differences, though, in both the data collection and the pattern detection methods. Concerned about system analysts introducing their own bias into the data collection, the CE group focuses on getting both the data and its interpretation directly from the users, through methods like Anecdote Circles, Future Backwards or Archetype Extraction.

Another difference between the methods used by the two groups is the CE group's approach to scaling. Processing the stories manually is a problem for scaling the narrative methods. The solution offered by Cognitive Edge is a proprietary tool, called SenseMaker®, which can be used for collecting and indexing a large number of stories. The stories are indexed by their authors, based on a reference framework created before the collection of the stories. A pre-existing reference framework means that the analyst bias is not completely removed, but its effect is reduced through a design that is broad enough to capture conflicting views and smart enough to avoid leading the users to the "correct" answer. The indexed data is then used for detecting patterns, and the stories provide the context for interpreting the patterns. The CE methods have not been developed specifically for product requirements gathering, so one can find diverse examples of their applications on the Cognitive Edge articles and case studies pages.
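
To make the idea of author-indexed stories concrete, here is a minimal sketch of the data shape involved. It is not SenseMaker's actual design, which is proprietary; the Story class, the signifier scales and the bucketing below are all invented for illustration.

    from dataclasses import dataclass, field
    from collections import Counter

    @dataclass
    class Story:
        text: str
        # Scales fixed when the framework is designed; the values are
        # assigned by the story's author, not by an analyst.
        signifiers: dict = field(default_factory=dict)

    stories = [
        Story("The night shift found a workaround...", {"tone": 0.2, "control": 0.9}),
        Story("Head office sent us yet another form...", {"tone": 0.8, "control": 0.1}),
        # ... collected at scale, each one indexed by its own author
    ]

    def distribution(stories, key, buckets=4):
        """Crude pattern detection: histogram one signifier scale."""
        counts = Counter(min(int(s.signifiers[key] * buckets), buckets - 1)
                         for s in stories if key in s.signifiers)
        return [counts.get(b, 0) for b in range(buckets)]

    print(distribution(stories, "control"))

The essential property is that the numbers attached to each story come from its author; the analyst reads patterns from the indices and only then drops into the underlying stories for context.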

The role of the analyst is an important difference between the JTBD and CE methods. Since both methods have successful cases to show, I can only conclude that the role of the analyst in requirements gathering depends on the context. The Cynefin framework developed by Dave Snowden might provide a clue to defining the analyst's role. If the problem is in the complicated domain, expert analysis will be a very efficient tool for explaining the patterns. If the problem is in the complex domain, allowing the patterns to emerge will be the only solution. However, depending on the experts and their expertise, problems might appear complex to some experts while others will consider them complicated. The amount and the quality of the data can also influence how a problem is viewed.

The JTBD and CE methods help in discovering problems, needs or opportunities for product development. There is a long road from discovery to end product, and it might require several product development and user feedback iterations, a process that Dave Snowden describes as co-evolution.

For software developers the use of narrative approaches sounds like good news. User stories are a requirements capture method that has gained a lot of popularity lately. However, there are a few problems in the way user stories are captured by most software development teams, and they are highlighted by Jim Coplien:

    What Alistair [Cockburn] originally meant by user story is something like the following:

    "Susan, who is a doctor and has two children is a shift worker in a hospital. She works different hours on weekends than during the week. She wants to set up her alarm to get up at the right time. Sometimes she has to work night shifts or two shifts a day. She wants to set her alarm for an entire week in time, because she knows her work schedule a week in time, so she could wake up at the right time to go to work."

    There is a user... and a story, hence the term user story.
    Compare this to "I, as a user, want to set up my alarm a week in advance."

Not that I am complaining. Without the oversimplification of user stories we would never have had the wonderful world of the Cat User Stories.


Notes:
1. This post owes a lot to the days I spent in February 2013 in Amsterdam, attending the Practitioner Foundation course organized by Cognitive Edge and taught by Tony Quinlan. After three days around stimulating subjects and smart people, scattered questions and answers started to converge for me.
2. Malcolm Gladwell is as good a speaker as he is a writer, so it's worth listening to him telling the stories. Check this and this.

March 24, 2012

The unscripted collective art of software engineering



Are we human or are we dancer?
My sign is vital, my hands are cold
And I'm on my knees looking for the answer
Are we human or are we dancer?
"Human" by The Killers

The history of software development places the roots of modern software in the aftermath of WWII, making it a young trade at the scale of human history. Like the aviation or car industries before it, software started as an activity carried out by a few pioneers, individually or in small groups. A typical child of the 20th century, software turned into a mass production activity as soon as its value became clear and the advances in computer hardware and programming languages allowed it.

As an industry, software development was faced with an interesting question: "How should the teams of people working on software be organized?" I started my career in the software industry at the end of the '90s. The prevailing model at the time was the waterfall model, sometimes drawn in the fancy shape of the V-model. The linear flow of activities in the model reminds me of successful industrial manufacturing, with the assembly line as its centerpiece. The software is passed forward, from one end of the line to the other, in a well organized fashion. The choice of production model should not come as a surprise. The assembly line had been the most successful way of organizing mass production, so it was probably viewed as the best practice of the day.

The limitations of the waterfall model have led to the emergence of a new style of software development: "one that breaks down hierarchy, that features dynamic social structures and communication paths, and that values immediacy. This [...] style often bears the label "agile," but that is just one of many characterizations of a broad new way of developing software [...]." The new style is iterative, with organizations today using 1-6 week iteration cycles and continuous integration of software. The natural consequence of such a short iteration cycle is that linearity and sequencing are completely abandoned: testing can start before there is any code (e.g. test driven development), design can happen after the code is already functional (refactoring), and requirements may be discovered at any point in time. If I had to pick the single most important contribution of the agile SW development community, the departure of SW creation from the assembly line model would be my choice.
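
Test driven development is the clearest instance of that inversion: the test exists before the code it tests. Here is a minimal sketch of the rhythm, with a hypothetical shift_alarm function invented for the example.

    import unittest

    # Written first, while shift_alarm did not yet exist; the failing
    # tests are what drive the implementation below into existence.
    class ShiftAlarmTest(unittest.TestCase):
        def test_wraps_past_midnight(self):
            self.assertEqual(shift_alarm(23, 3), 2)

        def test_negative_offset(self):
            self.assertEqual(shift_alarm(6, -8), 22)

    def shift_alarm(alarm_hour, shift_offset):
        """Return the alarm hour moved by a shift offset, wrapping at 24h."""
        return (alarm_hour + shift_offset) % 24

    if __name__ == "__main__":
        unittest.main()

The order on the page mirrors the order of the work: first the expected behavior, then the code that satisfies it.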

Why did the assembly line organization not work for SW creation? The best hypothesis I have seen is that SW organizations are closer to self-organizing systems than to assembly lines. In "Agile Software Development with Scrum", Ken Schwaber argues that software activities fall into the realm of complex systems (as opposed to simple or chaotic systems). Complexity science, the discipline that studies self-organizing systems, would then explain very well why a simplistic linear model fails when many teams are involved in the creation of the SW (1). Although complexity theory would explain why waterfall SW organizations cannot succeed, it does not give much guidance on the practical matter of how SW teams should deal with their daily work.

Understanding what software development is could be one way to figure out how software developers should approach their work. As a graduate of an engineering school, I am sensitive to the arguments of David L. Parnas and Steve McConnell, who favor an engineering approach to the development of commercial software. Parnas compares the separation of software engineering from computer science with the similar split between electrical engineering and physics. "An engineer", says Parnas, "is a professional who is held responsible for producing products that are fit for use". That type of responsibility requires different training and work methods than what scientists need.

While the engineering approach to software appeals to me, I would point out that not all engineering spun off from science. Civil engineering developed out of necessity. The core of the mathematics used in construction design had been developed by the ancient Egyptians and Greeks, but the modern science needed by civil engineers emerged in the 17th century with Isaac Newton's classical mechanics. Later, advances in physics and chemistry provided engineers with new or improved construction materials, better heat insulation and illumination. However, the lack of modern science did not prevent people from using empirical methods to build impressive constructions in ancient times or the Middle Ages. Once science became advanced enough, civil engineering also morphed into a modern engineering activity, i.e. "the application of scientific and mathematical principles toward practical ends". Re-basing civil engineering on science has led to an interesting phenomenon: a distinction between architecture and civil engineering started to develop. I think this is no coincidence. There is a considerable overlap between architecture and construction engineering, but the architects want to make a clear statement that, apart from science and mathematics, there is also art needed when designing spaces inhabited by people.

I believe that software today is still in a mixed, undifferentiated state. Although voices like Parnas' and McConnell's get louder, there is still a great deal of confusion between software engineering and computer science. In addition, there is no formal recognition yet of the empirical and artistic aspects of software development. I choose to describe software development as an unscripted collective engineering art.

Art. Let me first clarify that art in this context may refer to either art or craft. The difference between the two does not affect the rest of the arguments, so I will use art and craft interchangeably.

There have already been pleas to include software among the artful activities. In The Pragmatic Programmer: From Journeyman to Master, Andrew Hunt and David Thomas make extensive arguments in the same direction: "Programming is a craft. [...] As a programmer you are part listener, part adviser, part interpreter and part dictator. You try to capture elusive requirements and find a way of expressing them so that a mere machine can do them justice." Fred Brooks attributes much of the joy of software development to the artistic freedom programmers have: "The programmer, like the poet, works only slightly removed from pure thought-stuff. He builds his castles in the air, from air, creating by exertion of the imagination. Few media of creation are so flexible, so easy to polish and rework, so readily capable of realizing grand conceptual structures."

Jonathan Wallace considers that software development is completely dominated by art and should free itself from the engineering approach. That goes one step too far for my taste. Software products solve practical problems, so I prefer to keep the engineering sense of "fit for purpose" in software. Much like architecture, I see software as a balance between art and engineering. It is an approach similar to what James Dyson advocates for industrial design and uses for building vacuum cleaners, washing machines or hand dryers. It is the "intersection of liberal arts and technology" that Steve Jobs nurtured at Apple.

August 27, 2011

It Is All in Your Mind (How Soft Is the Software?)

The trigger for this post came from an interesting discussion I had with Bas Vodde during a Scrum master training session that he held in Helsinki. I had a very weak voice after a bad cold, so, with some effort, I told him the story of the most agile project I had ever heard of (*). It was about building apartment buildings in Cairo, Egypt. The development companies make the plans, present them to potential customers, and when the first floor is bought they start building the apartments, leaving the rooftop ready for building the 2nd floor. When the 2nd floor is sold they start building it, and so on and so forth. I understood that the need for apartments in Cairo is really high, so people are willing to move into a place under construction. The weather also allows the buildings to stay unfinished for a long time.

Now, if you build like this you obviously need a foundation strong enough to sustain 5, 10 or 20 floors, and you must have that in place from the beginning. You cannot change your mind after the first 3 floors and say "I'm gonna build a 20-story building even though my foundation was only for 5".

My question to Bas was: how can we actually build a foundation strong enough to last for several releases of a software product? "You don't have to", was Bas's reply. Software is not like a concrete and steel building. The software is soft; you can change it. That answer got me wondering: how soft is the software?

The Linux kernel and Eclipse are notable examples of products with an evolution path that did not require breaking everything down. However, I also have good reasons to be skeptical. I have had the privilege to contribute to successful software products, and I have friends who have worked on such products. In many cases, entire releases were dominated by refactoring, rewriting and re-architecting. Microsoft did that with several releases of their operating system and of Internet Explorer, although they apparently managed to avoid it in the Office suite. Netscape and then the Mozilla foundation did it with the Communicator/Firefox browsers. Nokia, the company where I worked until recently, is giving up on not one but two software platforms (Symbian and MeeGo) just because all the attempts to re-engineer them were not bringing results fast enough. Being smart does not make you immune, but it might help in selling such a release by adding a few fruity treats on the side.

Despite all the evidence, the myth that software is easy to change has become pervasive in the software development community. This is my attempt at explaining how the legend developed:
  • The most obvious and irrational reason for the belief may be the term itself: software. Coined by John Tukey in the '50s, the term is defined in opposition to hardware. Language and beliefs are linked together. The saturated repetition of statements as a way to create beliefs has long been used in the political arena and as a brain-washing technique. I do not suspect any malicious intent behind the use of the word "software", but its unchallenged, widespread usage could create the belief that the software is, you know... soft.
  • The success of the software upgrade practice. Companies big and small, producing software for personal computers, phones or services, can all upgrade their software and deliver corrections. Sometimes it can be done remotely, without a human ever touching the hardware device. Bringing corrections and updates to a software system already in use is easier, cheaper and many times less messy than upgrading a house or recalling cars to correct defects.

    Does it actually prove that changing software is easy? It only proves that deploying changes is easier in software than in other industries; it says nothing about how easy it is to make the changes.
  • The software is a product of the mind. The software lives in a computer memory, but it gets created in the minds of the software developers. It is a mind model, expressed in a certain programming language, of a particular problem that needs to be solved. Without any physical limitations, you can bend and stretch the model any way you like, can't you?

    A small software project has at least a few thousand lines of code. The typical products that I helped build contained interdependent components that amounted to millions of lines of code. At that scale your mind can play funny tricks on you. The job of the mind is to make the world around us comprehensible, which leads to a lot of simplifications. You will miss many details, which will result in mistakes. Most developers understand this at a rational level, but when someone, say a tester, tells them that their mind has failed them, they instinctively become defensive. Maybe because admitting that your mind can produce the wrong result is close to admitting insanity. It is personal.

    It might also be one of the reasons why history is full of stories of great people like Mahatma Gandhi, Martin Luther King or Nelson Mandela, who led long fights against corrupted mind models. Those are examples on a whole different level, but they show that people can cling to even the most unreasonable ideas for their whole life. It was more comfortable to patch, for centuries, a model of the Universe centered on a flat Earth, rather than use the evidence to create a new model. Mind models do not bend easily, even when faced with overwhelming evidence.
Although not everything I have written was soft, I have built enough soft software myself to bear witness that such a feat is possible. It is really hard to build soft software (**). It requires a lot of discipline, and it initially takes more time than writing the spaghetti version. And there will always be temptations not to do it. Ken Schwaber describes how software ends up stone-wall hard in his talk at Agile2006. I have witnessed myself, and heard from other people about, projects that were ruined in the exact manner described by Schwaber.
Is there a recipe for developing soft software? In the world of the magical "5 things" you need for success, it would be: hire good engineers, provide them with good business guidance, provide them with a nice work environment, and then let them do their job. Easy, isn't it?

-------------------------
(*) Mind you, this is just a story, do not take it as a fact.
(**) The reasons why you would want to build soft software are explained nicely by Joel Spolsky in a post called Things You Should Never Do.



October 16, 2009

Refactoring analogies

One of the interesting sides of software development being a relatively young profession is the lack of established mental models for different activities occurring in the industry. One of the effects is the use of analogies to other human activities as a way of explaining software concepts.

Of course, analogies are a tool used extensively outside the software profession. The difference I see, though, is that software professionals use them to explain things within the community, rather than for popularizing their subjects outside the community. Doctors rarely use analogies to explain medical matters to their colleagues. Physicists use analogies extensively to explain abstract theories like relativity or quantum mechanics to outsiders, but not when talking to each other (Schrödinger's cat excluded).

Refactoring or re-writing software components is one of the activities most difficult to explain within the software development community. There is a quasi-consensus that it is good, although only a few people can explain why.

Although refactoring and re-writing are an obvious case of re-work, many people go to great lengths to explain the opposite, because lean teaches us that re-work is bad. This is where I think the weakness of the analogies starts to show. Analogies break if you try to extend them too far. Lean concepts come from the product creation activities of the auto industry. While there is a lot in common between the product creation processes for cars and software, refactoring might be one of those cases where the analogy breaks.

Craig Larman has another analogy for refactoring: it is like pruning your garden. In both cases you cut dead material to allow the growth of fresh branches. Do you think that this statement forces the analogy a bit? "Many new gardeners can't bear the idea of cutting back an entire plant, but this is tough love and your plants will thank you."

I have found that an analogy to crop rotation is a good way to explain the return on investment (ROI) of refactoring and re-writing. For example, product owners in Scrum can drive high-ROI features from a SW component for one iteration/release, but then they will have to accept a lower ROI for the next period, because the component needs to be refactored. This is similar to planting clover after cereals, or letting the plot rest. Usually you divide your software into multiple components, the same way farmers divide their farm into multiple plots. If you refactor often enough, you will not need to do it on all the components at the same time. You can drive your high-ROI features from some of the components while not adding any features that touch the components requiring refactoring.
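
A toy sketch of the rotation, with component names and a simple round-robin policy invented for the example:

    components = ["billing", "search", "reports", "profiles"]

    def plan(iterations):
        # Each iteration one component "rests" (gets refactored) while
        # features are driven from the remaining ones, like crop rotation.
        for i in range(iterations):
            resting = components[i % len(components)]
            working = [c for c in components if c != resting]
            print(f"iteration {i + 1}: refactor {resting}; features from {working}")

    plan(4)

In practice the rotation would be driven by the state of each component rather than a fixed schedule, but the invariant is the same: no iteration refactors everything, and no component goes unrefactored for long.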

The longer you go without refactoring or re-writing, the more painful it will be to do it. If you are lucky and smart, you might find some low-hanging fruit to pick during a massive refactoring. If not, you will just be giving a release to your customers for free. However, if you persist in not doing the refactoring at all, you will end up with dead land that will turn into desert.

October 15, 2009

Software Testing Means Thorough Analysis

I am a software developer, so it might look strange that the subject of my first post is related to testing. However, I feel that justice needs to be done. Not long ago I heard, yet again, someone claiming loudly that having the developers do testing is a waste of money. The context: a discussion about agile software development, in which the same person was saying that every member of an agile team must develop multiple skills and be ready to do more than one job if needed. Apparently "everybody" meant, in this case, only the testers.

I am a software developer who has worked for about 10 years in this industry. After the first five years I took some time off from programming and worked for almost two years as a tester. I really wanted to learn the trade, and the better I became at testing, the more I learned about programming and programmers. I will not go into the discussion about why you need professional testers; Joel Spolsky has a full post on the subject on his blog. Nor will I share with you how exciting and fun it is to work as a tester; Harry Robinson does it better than I am able to.

The one thing I will do is argue that good testing is one of the most thorough analysis activities in a software project. It would be enough to mention the two questions that lay the foundation of any decent software testing: How does the software work? and How should the software work? Those are already fundamental analysis questions. If you argue that the answers are already provided by the designated analysts, architects and programmers, I will use James Bach's words: "Programmers have no clue what the boundaries are".

Once a good tester is past the initial questions, she will bring out the big guns: Why does the software behave like it does? and Why should the software behave like it does? Let me explain what could fall under those questions:
  • Is the observed behavior intentional or accidental? The test may pass, but when you take a closer look at the logs you notice that it passes merely by accident. Changing the input just a little, or repeating the test several times, will make the application go wild (see the sketch after this list).

  • Does the software follow common sense? Sometimes the software behavior follows the specifications, but not common sense. A radio button used as a label, a network protocol using 10 pairs of messages for the initial handshake, or one that redefines every message of a well known specification are a few wild examples, slightly exaggerated, but nevertheless derived from reality.

  • Sometimes nonsensical behavior is explained by programmers with this innocent phrase: "It has always worked this way". A good tester will challenge such thin explanations.

  • A variation of the previous case is the "guru specification". I once worked with a tester who was trying to understand a nonsensical part of the functional specification. He found out that the section had been written by The Architect. He went to the architect and started questioning the spec: why should this part work like this, why should that part work like that. It did not take long before the architect got annoyed and answered: "It is like this because I said so." The tester replied: "I see... But now, seriously, dude, why should it work like this?"
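
A minimal sketch of that first probe: re-run a check many times with slightly perturbed input to expose an accidental pass. The parse_timeout function and its deliberate bug are invented for the example.

    import random

    def parse_timeout(text):
        # Deliberately fragile: relies on the input having exactly one space.
        return int(text.split(" ")[1])

    def probe(fn, base="timeout 30", runs=100):
        """Re-run the same check with small input perturbations."""
        failures = 0
        for _ in range(runs):
            noisy = base.replace(" ", " " * random.randint(1, 3))
            try:
                assert fn(noisy) == 30
            except Exception:
                failures += 1
        return failures

    print(f"{probe(parse_timeout)} failures out of 100 perturbed runs")

A single run on the well-formed input would have passed; the perturbed runs show that the pass was an accident of the input's shape.
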
When the why questions are finished, the difficult part begins: How does the software really work? and How should the software really work? Only a few projects reach this phase, and not many testers are able to perform it. Usability testing is one example from this category, but it only scratches the surface.

Can and should the programmers do testing? All the analysis skills needed for testing are also an excellent asset for a developer. However, based on my personal experience so far, the testing environment seems to do more to encourage non-conformism and the ability to challenge established views. I will not claim that all programmers can become good testers. But all programmers would learn a lot more about the software, and about their colleagues, if they performed testing often.

By the way, if you agree with my reasoning, all of the above are further arguments for doing the testing early. You do not want your perhaps best analysis phase to happen late in the project, do you?

As for the programmers shouting "I should not and will not do any testing because it would be a waste of my precious programmer value", letting them write code might prove to be the real waste of money at the end of the day.