Category Archives: Process

Your Annual Plan Is Already Wrong

Your year-at-a-time plans and goals are already wrong, but if you know this about them, they are still worth making, provided you don’t take them too seriously.

Another January is upon us, bringing us into the season of annual planning exercises. Annual plans are the organizational equivalent of New Year’s resolutions – we feel obligated to make them, even though we are unlikely to keep them.

In most organizations, there is little practical difference between the end of December and the beginning of January, and so there are few objective grounds for over-weighting this part of the year as a time for course corrections and new ventures. Yet, the unblemished calendar invites us, like fresh snow, to create something remarkable upon its canvas. We aspire to high art, and this seemingly demands a plan that will keep us from wandering around in circles, doubling back over our tracks, and generally making a mess of the yard.

My friends, the mess is unavoidable. Success is rarely achieved in a straight line.

We should no longer be surprised by the annual objectives which seemed so central in January and were scarcely relevant by June. We can hardly doubt that our signature accomplishments in a given year were often unimagined at the start of that year. If we are attentive and realistic, we recognize that these discrepancies between planned and actual are as much the norm as the exception.

Humans are poor planners and feeble fortune tellers, and pretending otherwise amounts to damaging obstinacy. Let us instead embrace our obvious limitations and our well-documented fallibility, so that we can prepare and act accordingly. We are equipped with lanterns, not flashlights. We can see what is close at hand, but most of the path before us will remain dark until we are right on top of it.

The rise of agile software development methods is an outgrowth of this recognition. Agilists accept that they cannot see the future very well. They don’t try to look very far ahead because people simply aren’t very good at doing so and because changing circumstances often invalidate plans made on large time scales. Instead, agilists work in small iterations, experimenting and successively refining until they have made their way to their goal. With each iteration of their output, they evaluate what they have done so far and what the next best thing to do would be. They keep long range goals in mind as a guiding star, but they navigate in the moment by more immediate and accessible beacons. If they make a mistake, they won’t go very far before giving themselves a chance to correct it and get back on course.
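The loop described above — build a small slice, evaluate, adjust, repeat — can be sketched very loosely in code. This is purely illustrative (the function names and the shape of the feedback are invented, not drawn from any particular agile framework):

```python
def iterate_toward(goal, build_increment, evaluate, max_iterations=50):
    """Work in small steps, checking progress and course after each increment.

    build_increment: produces one small slice of work given what exists so far.
    evaluate: inspects the work so far; returns (done, possibly-adjusted goal).
    """
    product = []
    for _ in range(max_iterations):
        product.append(build_increment(product, goal))  # one small slice of work
        done, goal = evaluate(product, goal)            # inspect, correct course cheaply
        if done:
            break
    return product
```

The point of the sketch is that evaluation and goal adjustment happen inside the loop, after every increment, rather than once at the end.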

The corollary lessons for annual planning exercises, including personal goals, are clear. If your planning cycle is annual, then your plan is already wrong, not because of its established content but because it doesn’t contain realistic mechanisms for error correction and adjusting to change. The winds of the world are abrupt and fickle, and we can’t keep sailing in the right direction by changing our tack on a merely annual basis.

Instead of planning annually, organizations should:

  • plan for multiple time horizons, such as “within three months,” “within six months,” “within a year,” and so on
  • make near-term plans the most detailed (because these cover the ground we can actually see well)
  • make long-term plans more directional and less defined
  • revisit plans regularly, multiple times within a year, for review and adjustment
  • accept that plans sometimes turn out to be wrong, and that there is no shame in adjusting to new information or realizations
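One way to picture the suggestions above (the items and numbers here are entirely invented, for illustration only) is as a plan whose detail decreases with the time horizon, and which carries its review cadence as part of the plan itself:

```python
from datetime import date, timedelta

# Hypothetical multi-horizon plan: nearer horizons get concrete items,
# farther horizons stay directional. Names and items are made up.
plan = {
    "within 3 months": ["ship billing fix", "hire one backend engineer"],  # detailed
    "within 6 months": ["reduce deploy time", "pilot new support tool"],   # looser
    "within 12 months": ["expand into an adjacent market"],                # directional
}

# Revisit well inside the year, not once per year.
review_every = timedelta(weeks=6)

def next_review(last_review: date) -> date:
    """When the plan should next be re-examined and adjusted."""
    return last_review + review_every
```

The structural point: the long-range entries are vague on purpose, and the review interval is short relative to the longest horizon.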

The annual plan is a fossil, left over from the days in which things moved slower and we knew less about ourselves. It’s time for us to evolve into planning cycles that are more natural, realistic, and credible.

Meaty Methods and Novel Names

Measure the substance of a process by what is left over when you remove its terminology.

Processes, methodologies, and frameworks often exhibit a compulsion to invent novel names for ordinary concepts, and to create highly specialized senses of everyday words. Examples include “Voice of the Customer” in Six Sigma and “fast tracking” in PMI parlance. Sometimes, there are benefits to having such names, but these are mainly associated with efficiencies in explaining the process/method/framework itself. Even if a specialized vocabulary would help me communicate with other members of the cognoscenti, I’m usually communicating with a mixed audience, and so my grasp of the vocabulary atrophies for want of practice. (When I mentioned “fast tracking” a few sentences back, I had to go look up the term to remind myself of its specific meaning – i.e., doing things in parallel – in the PMI world.)

Outside of the phase in which somebody is learning about the process/method/framework, I’m not sure there’s much to be gained from wrapping special terminology around common sense concepts. In large part, I judge the substance of a process/method/framework (not its correctness or applicability – those are different matters) in terms of what’s offered outside of the vocabulary. If I didn’t have to use your terminology (hint: I probably won’t), how much is left to learn? That’s the meat.

The high-level DMAIC (Define / Measure / Analyze / Improve / Control) process of Six Sigma fame is not very meaty. It’s more like terminological shorthand for how an intelligent and methodical person would go about solving a problem in a serious way. You don’t learn it so much as recognize what the Six Sigma folks want you to call it. In contrast, the idea that I should focus on process control via standard deviations and have to measure conscientiously in order to do so – a key tenet of Six Sigma that fills out the intended implementation of DMAIC – is pretty meaty.

How to Define a Software Development Process

Your software development process should be contextual, likable, forceful, and readable, and you should get it this way by iterating over it.

Defining a development process is hard

So, you want to define a software development process for your organization. Good luck. If it isn’t hard work, then you’re probably not doing it right, because you ought to take many factors into account, and those factors tend to be very organization-specific. Although you can gather lots of great ideas from books and process gurus, your ideal process will be so influenced by environmental factors that no text or outsider can tell you what it should be. At the end of the day, all of the tough calls will be on you. Also, since the development process will affect every project and stakeholder in your organization, it’s a high-stakes proposition, and something you probably can’t afford to screw up.

What should the process cover?

Actually, figuring out what kinds of things the process should govern / specify is an essential part of the process definition. (No free lunch here.) It’s not just a matter of figuring out how to answer a series of questions in an organization-specific way, which would be hard enough. You also have to figure out which questions are worth answering in the context of your organization and its goals. A larger development shop may dicker productively over the standard set of documents that will be dissected in design reviews. A one-horse development shop probably doesn’t have the same concern, as there’s nobody to do such reviews.

So, what kinds of things might a development process address? The following rough categories would make for at least a decent start:

  • Activities that will happen: The “coding” part is more or less a given, but are you going to do ROI analyses, design reviews, independent testing, user acceptance testing, management approvals, formal training, etc?
  • Order in which activities will happen: Are you going to be more waterfall-ish, with code mostly following up-front requirements gathering and design, or are you going to be more agile-ish, and allow your code to grow up alongside your requirements and design?
  • Conditions on activities: What circumstances require or preclude particular activities? What approvals or phase gates govern movement from one activity to another?
  • Deliverables produced: What kinds of products do you need to create in addition to the software? Design documents? Manuals?
  • Enforcement mechanisms: How will you ensure that the requirements of the process are being respected? Will you do process reviews? Audits?
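To make the categories above concrete, here is one (entirely invented) way a small shop might record its answers as plain data rather than prose, so the decisions can be reviewed and revised like any other artifact. None of these items are prescriptions from the text; they are placeholders:

```python
# Hypothetical process definition, one entry per category from the list above.
process = {
    "activities": ["coding", "code review", "user acceptance testing"],
    "ordering": "requirements sketch -> iterative build -> UAT",
    "conditions": {
        "code review": "required for every change to the main branch",
        "UAT": "required before each production release",
    },
    "deliverables": ["release notes", "user guide updates"],
    "enforcement": ["quarterly process review"],
}

def required_before_release(proc):
    """List the activities whose conditions tie them to a release."""
    return [a for a, rule in proc["conditions"].items() if "release" in rule]
```

Writing the decisions down this explicitly also makes the later point about “shalls” easier to enforce: each condition is either stated and mandatory, or absent.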

Characteristics of a good process

Consonant with the above, it’s hard to offer much advice on what your process should look like, because your process needs to be tailored to its context. However, here are some meta-level suggestions about software process definition in general:

1) Software practitioners (engineers, project managers, etc) should like the process. They don’t need to take it with them to Disney World, but they shouldn’t be bristling and gritting their teeth as they struggle under its yoke. If practitioners don’t like the process, it won’t be used as intended, and will hence be a failure. Even if a disliked process is mandatory, people will figure out ways to achieve superficial compliance, even though they are really doing things some other way. Accordingly, the process should be designed mostly by the practitioners themselves. This will presumably lead to a process that the practitioners think is valuable, and will greatly increase the likelihood of broad buy-in. Management can and should levy requirements on the process, but the practitioners should be allowed to satisfy those requirements as they see fit.

2) The process must be forceful, which is to say that it must contain mostly shalls – things people have to do (if they want to avoid a beatin’), as opposed to shoulds – things that are recommended. When a group is trying to define a process, members will often weigh in with suggestions, guidelines, and best practices that represent their views on how software engineering should be done. This may seem useful at first, but any group of experienced professionals can say a great deal about how things should be done. Before long, you’ll find yourself writing a software engineering book, filled with a big pile of shoulds that individuals will happily ignore whenever timelines get tight or when they happen to disagree with the recommendation. The payoff of this book will be too small to justify the time it takes to write it. Instead, your process should focus on what is mandatory / required, even if it is only required in certain circumstances. There is value in a big pile of shoulds, but it belongs in knowledge sharing activities (lunch and learn stuff) as opposed to a process definition.

3) The process must be extremely concise (as minimal as possible) and readable. A process that expounds on its topics in an academic and/or lengthy manner will go unread, and a process that goes unread is a process soon to be dead.

4) The process should be developed iteratively, because iteration works for everything. Don’t wait until you have the whole thing nailed down before you deliver something. Rather, start piloting elements of the nascent process as soon as possible, so that you can gain some real-world experience and adjust accordingly. Particularly if your process levies new requirements on people, you’ll want to see where people are tempted to cut corners, and if they still agree in the heat of battle to the same things they agreed to on the whiteboard.

If the process you develop is sensitive to its context, respected by the people who will use it, slanted toward mandatory practices, and right to the point, it will likely be a success. Further, you are more likely to define such a process by iterating over it early and often.

Iteration Works for Everything

Everybody knows the benefits of iteration for developing software, but iteration should be practiced for developing just about everything.

If you’re working in software development, professional literacy / competence demands some acquaintance with iterative development techniques. From intern to architect, it has been drilled into us all – iterate, iterate, iterate. Most of us can and do carry the tune – we deride the linear ways of waterfall (at least explicitly), and preach the gospel of iteration.

So, why do we iterate? I think these are the best reasons:

  • People often don’t know what they want – It takes seeing some kind of semi-realistic product before people realize what they do and don’t want from a system. Iterative development gives people slices of functionality that they can test-drive, thereby enabling the feedback needed for future iterations.
  • People often miscommunicate – Even when they know what they want, people aren’t always so good at expressing it. Further, even if well-expressed, a requirement might be easily misunderstood. Iteration allows people to look at something in progress and say very simply and specifically what is right or wrong about it.
  • Requirements change – Targets have a way of moving, even the ones that have already been hit. Iteration gives you a way to continually re-validate requirements and to keep pace with fluid demands.

So, we all know the above, and countenance these truths in our development practices. Yet, for some odd reason, most of us reserve the valuable practice of iteration for creating code. When we need to produce user guides, proposals, standards, COTS evaluations, slide decks, and any other non-software deliverables, we revert to the dark ages of engineering, hoarding our labors and polishing them at length, waiting for the right moment at which to unleash them upon the world.

Your non-software deliverables are just as susceptible to miscommunication, moving targets, and imperfect requirements as your code is. As such, you should iterate over those as well. Don’t wait until you have a complete and thorough document / presentation / proposal before throwing your cards on the table. Instead, get something (e.g., a table of contents, outline, or synopsis) out to other stakeholders as soon as you can, and let the magic of iteration help you to make these products as good as they can be.

Iteration works for everything.

Does Your Process Scale?

Processes, like systems, have to scale both vertically and horizontally.

We know that the systems we build have to perform under pressure. So, if we’re smart (and not so burdened that we are just barely getting things out the door), we do load testing, concurrency testing, scalability testing and all the other things that help us to ensure that our products can step confidently out of the development ecosystem and handle the workaday rigors of production use.

What about our processes? Often, we test a new process or methodology in isolation, on some low-risk demo project. We see how things work. We fidget, tweak, and tune. Eventually, we add enough and subtract enough to get the process humming along, until it’s just right (or at least good enough). Then, we say that we nailed down our process.

However, sometimes we forget that our process has to scale, just like our systems. Can the project management process you defined work for seven simultaneous projects all managed by the same PM? If it consumed most of the PM’s time on your simple demo project, what is going to happen to it when you try to apply it on a program or department level scale? What about those design artifacts you created for the demo project? It was nice to have class diagrams, sequence diagrams, stakeholder profiles and data dictionaries for the demo project, but can you sustain that level of documentation for your marquee product, the one that keeps eight developers continuously occupied?

Your systems have to scale, but your process does too. Use one that you can sustain consistently across the breadth of your concerns, given your available resources. Just because something worked on a demo project does not mean it will survive a collision with the rest of the stuff on your plate. Organizations often design processes for vertical scalability (adjusting the rigor and demands to the size of the effort), but fewer think about scaling horizontally (ensuring that you can still apply the process to many efforts of a similar size).