One-Page Project Plans

Because project management is a social, participative activity, the effectiveness of particular methods is inevitably bound up with the culture and aptitudes of the organization in which those methods will be employed. For an organization with little project management experience, earned value calculations don’t appear meaningfully different from the blithe “we’re 90 percent done” statuses that plague poorly run projects. Similarly, scope statements and change management plans are unlikely to find many admirers in organizations that rely heavily on oral communication and often fly by the seat of their pants.
 
In such organizations, trying to manage projects with the overt use of PMI-style techniques is probably futile and may even be counterproductive. Not every organization is ready for the PMI way of doing things, and if formal project management techniques are introduced clumsily, the organization is likely to recoil and dismiss them as a waste of time. Instead, it is often better to meet the organization where it is and begin with simple methods that have broad, intuitive appeal.

The above is an excerpt from a 2-part series I wrote for Gantthead under the title “When PMI is TMI.” In that series, I advocated using one-page project plans for organizations that have minimal project management culture in place, because more elaborate methods are likely to meet with insufficient support in such organizations. While a one-page plan is obviously pretty thin, in many cases it would be preferable to more ambitious approaches that can fall into rapid disuse and/or actually turn people against formal project management techniques.

The articles are linked below, and have generated some favorable comments among the members of the Gantthead community. Part II also includes a sample of a one-page project plan that I used a few years back. Unfortunately, Gantthead requires registration before you can read an article in full – sorry about that.

When PMI is TMI, Part I

When PMI is TMI, Part II

But Can You Failback?

Your disaster recovery plans need to address not only failing over to standby resources, but also how you’re going to make your way back when the time comes.

Many organizations practice failover testing, wherein they engage a backup / secondary site in the event of trouble with their primary location / infrastructure. They run disaster recovery tests, and if they are able to continue their processing with their backup plan, they think they’re in pretty good shape. In a way, they are. However, few organizations think much about failing back – that is, coming back from their standby location / alternate hardware and resuming production operations using their normal set-up. It is a non-trivial thing to do, as it requires data flows and perhaps even operational process adjustments that the organization almost never gets to practice. Further, it’s almost certainly part of what you’ll have to do (eventually) in the event of a mishap. Yet, most disaster recovery plans don’t even address failback – it’s just assumed that the organization will be able to do it once the primary site is ready again. It’s worth making this return trip part of your disaster recovery plan.

I once worked for an organization that was big on disaster recovery / business continuity. They were pretty good at the business continuity part, and could move the work done (paying people) from one location to another pretty seamlessly in the event of crippling snowstorms, fires, and the like. (If you think that sort of thing is easy, then you probably haven’t done it.) However, their ability to do this was facilitated by the fact that all sites shared a common data center in a separate location.

Given this dependency, they also spent a fair amount of time planning for data center outages. They had regular back-ups to a remote location equipped with standby hardware, and would run twice-yearly simulations of losing the primary site and continuing operations using the backup site. After a few attempts, they had most of the kinks worked out, and understood the timings and dependencies associated with such a failover pretty well. Here I should mention that failing over a data center is much more complicated than failing over a single server, just as swapping out one piece of an engine is much easier than swapping out all of its parts and making sure they still align with one another.

One weekend, the feared “big event” arrived – our primary data center was flooded by some kind of pipe failure. Operations stopped dead. Yet, we didn’t fail over to our backup site, as almost everyone would have expected, because somebody asked a simple question – how do we come back?

The secondary site was clearly a temporary measure – understaffed and underpowered for what we needed to do on a long-term basis. Using it was always assumed to be a sprint – a temporary burst of effort that would soon subside. (Imagine the cost of having it be otherwise, with the meter on said cost running up year after uneventful year.) Such an arrangement requires some plan for ending the sprint and coming back to the normal production site as expeditiously as possible.

We had never practiced coming back.

Fortunately for us, the outage happened at a low period in our monthly cycle, when we could skate by with a few days of downtime. (If it had happened a week earlier or a week later, millions of people wouldn’t have gotten paid, and you would have heard about it on the six o’clock news.) So, rather than moving things over to a temporary platform and then moving them back just in time for our big rush, we just waited for the primary site to be made available again. In many ways, this was bad, but the higher-ups decided that it was better than the risks we would run by failing back without strong, tested processes right before our busy time. (The risks would have been nasty things like overpayments, missed payments, erroneous payments, etc.)

So, we sat tight, twiddled some thumbs, and waited out the outage while the people at the primary data center worked like lunatics in order to restore it in time. Our often-practiced disaster recovery plan had proven to be a failure, not because we couldn’t handle a disaster, but because we couldn’t handle returning to normal in a reliable and predictable way.

So, if your disaster recovery plans include failing over to an alternate location or even alternate hardware, make sure they also specify what happens after the clouds lift, and practice those things too. The middle of a disaster is not the time to be sorting out those kinds of details.

Don’t Innovate in Your User Interface

Innovative user interfaces are probably lousy, no matter how sensible or well thought-out they may be, because by definition they break the cardinal rule of usability – analogy to something the user already knows.

It’s much better to bring a better conceptual metaphor to a problem set than to build a better UI widget that doesn’t quite work like all of those other not-so-innovative UI widgets.

You’re a plumber, not Picasso. Your UIs aren’t a canvas – their point is to move crap around.

A few special apps can violate this rule – odds are, yours isn’t one of those.

How Systems Get Exploited – Content Becomes Structure

Don’t let content become structure.

There are many different ways in which an information system can be exploited, including buffer overflows, SQL injection, cross-site scripting, and so on. However, the vast majority of common exploits can be avoided by adherence to a single principle:

Don’t let content become structure.

In the categories of exploits listed above, maliciously crafted content breaks out of its proper role and becomes structure (instructions) that the system follows. If you can ensure that the values your system manipulates never become instructions for the system to execute, then you’ll probably be okay in terms of exploits in the products you build yourself. (Password management and platform hardening are different stories.) The ways in which you keep content from becoming structure are technology-specific (sanitizing form inputs, using JDBC parameters, etc.), but the underlying principle applies to most of the security holes you’re likely to create / avoid.
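To make the principle concrete, here is a minimal sketch in Java/JDBC. The table, columns, and method names are invented for illustration; the point is simply the contrast between concatenating content into the SQL text and binding it as a parameter.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class UserLookup {

    // DANGEROUS: the userName value is concatenated into the SQL text, so crafted
    // content like  x' OR '1'='1  becomes part of the query's structure.
    ResultSet findUserUnsafe(Connection conn, String userName) throws SQLException {
        String sql = "SELECT id, name FROM users WHERE name = '" + userName + "'";
        return conn.createStatement().executeQuery(sql);
    }

    // SAFER: the SQL structure is fixed up front; userName is bound as a parameter
    // value, so whatever it contains stays content and never becomes instructions.
    ResultSet findUserSafe(Connection conn, String userName) throws SQLException {
        PreparedStatement ps = conn.prepareStatement(
                "SELECT id, name FROM users WHERE name = ?");
        ps.setString(1, userName);
        return ps.executeQuery();
    }
}
```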

How to Estimate a Software Development Schedule

At first, the only way to estimate is badly. Then, the only way to estimate well is to keep estimating.

If you listen to what different camps are saying about software development schedule estimation, you’ll find that much of the discussion can be viewed as riffs on the “agile” vs “traditional” divide. Undoubtedly, such discussion raises many instructive points about estimation techniques, but it frequently overlooks a basic truth about estimation:

Everybody is terrible at estimating the first few times around, no matter how they do it.

This principle is often misapplied as a condemnation of traditional estimation techniques. I frequently hear people dismissing traditional techniques, such as Gantt charts, because they found that some attempt to use the technique was largely unsuccessful. That’s a bit like dismissing the piano because you can’t play like Mozart the first time out.

Any estimation technique you employ will require practice. The first time you estimate a project of non-trivial duration, your estimates will likely be horrifically wrong. The same might go for the second and third times as well. However, sooner or later you’ll catch on to certain things (e.g., sign-offs always seem to take a week, Joe low-balls everything, Susan always forgets to budget time for her testing), and you’ll revise your estimates accordingly. Over time, the quality and fidelity of those estimates will undoubtedly improve.

So-called “agile” estimation techniques are smart enough to formulate this truth more explicitly, by emphasizing that you need multiple sprints consisting of the same team members in order to establish a reliable velocity. However, you’re not just collecting velocity data over those multiple sprints, as though you were dipping a thermometer in different parts of a pool in order to establish an average water temperature. You’re also honing the team’s ability to estimate things by giving them estimation practice coupled with near-immediate feedback on how good their estimates were. Over the course of those sprints, the team is getting their hands dirty in the nitty-gritty of estimation, and learning how to do it better each time out.
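To make the velocity idea concrete, here is a minimal sketch in Java. The class name and the numbers are made up for illustration; a real team would pull the history from its tracking tool, but the arithmetic – average the completed points, then divide the remaining backlog by that average – is the whole trick.

```java
import java.util.List;

public class VelocityEstimate {

    // Average the story points actually completed in past sprints.
    static double averageVelocity(List<Integer> completedPointsPerSprint) {
        return completedPointsPerSprint.stream()
                .mapToInt(Integer::intValue)
                .average()
                .orElse(0.0);
    }

    public static void main(String[] args) {
        // Illustrative history: early sprints are erratic while the team is
        // still learning to estimate; later sprints settle down.
        List<Integer> history = List.of(12, 30, 18, 22, 21, 20);

        double velocity = averageVelocity(history);   // 20.5 points per sprint here
        int remainingBacklog = 130;                    // points left (made-up figure)
        double sprintsLeft = Math.ceil(remainingBacklog / velocity);

        System.out.printf("Velocity: %.1f points/sprint, ~%.0f sprints remaining%n",
                velocity, sprintsLeft);
    }
}
```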

The punchline here is that somebody working with an agile method is likely to learn how to estimate faster/better because of short cycle times, but this doesn’t mean that traditional methods don’t work – it just takes longer to get good enough at using the traditional methods, and many folks give up after the first or second time out.

Much more important than any specific method of estimation is the determination to keep applying your chosen method until you become competent with it.

Piwik Reports Chrome Frame as Chrome

Piwik 1.2.1 reports IE with Chrome Frame as pure Chrome. Just sayin’.

I’m playing with Piwik 1.2.1 – it’s a very nice tool, but it has a browser identification quirk that differs from the behavior of Google Analytics. I was running a copy of IE 7 with Chrome Frame (http://www.google.com/chromeframe), and Piwik was reporting those visits as coming from Chrome. That is, Piwik seemed to recognize / record no difference between true Chrome visits and visits using IE with Chrome Frame – all were lumped in the same bucket.

In contrast, Google Analytics reports Chrome Frame usage as coming from “IE with Chrome Frame.” Use of full-on Chrome is reported as “Chrome,” so that you can really differentiate between the two configurations.

By the way, you should also bear in mind that both Piwik and Google Analytics are just reading the agent string header passed by the browser. If a user has Chrome Frame installed, it shows up in the agent string, even if the add-on isn’t currently active (that is, even if the visitor isn’t actually seeing things rendered by Chrome Frame). The only way to get Chrome out of the agent string, and have the browser present to Piwik / Google Analytics as plain old Internet Explorer, is to uninstall Chrome Frame.
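As a rough illustration, here is a small sketch in Java of the kind of check the analytics tools are making. The agent strings are abbreviated examples and the matching is deliberately crude – real tools parse the header far more carefully – but it shows that the distinction hinges on whether a chromeframe token appears in the string.

```java
public class AgentSniff {

    // Very rough classification based on the raw User-Agent header. The check for
    // "chromeframe" must come before the check for "chrome", since the former
    // contains the latter. Installed-but-inactive Chrome Frame still shows up here.
    static String classify(String userAgent) {
        String ua = userAgent.toLowerCase();
        if (ua.contains("chromeframe")) {
            return "IE with Chrome Frame";
        } else if (ua.contains("chrome")) {
            return "Chrome";
        } else if (ua.contains("msie")) {
            return "Internet Explorer";
        }
        return "Other";
    }

    public static void main(String[] args) {
        // Abbreviated, illustrative agent strings.
        System.out.println(classify("Mozilla/4.0 (compatible; MSIE 7.0; chromeframe/13.0.782.218)"));
        System.out.println(classify("Mozilla/5.0 (Windows NT 6.1) AppleWebKit/535.1 Chrome/13.0.782.218"));
    }
}
```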

What Between Means in SQL (Not What You’d Think)

In SQL, “BETWEEN X AND Y” means “greater than or equal to X and less than or equal to Y” – not necessarily what you might infer from its plain-English counterpart.

Is 6 between 3 and 9? Yes.

Is 6 between 9 and 3? Yes, but not in SQL.

In SQL, we use the BETWEEN operator to determine whether a value falls between two other values. We might think that using a clause like “BETWEEN X AND Y” establishes a range between X and Y, and that value comparisons will be true as long as the value being compared falls into that range.

However, the BETWEEN operator works differently. “BETWEEN X AND Y” actually translates into “greater than or equal to X and less than or equal to Y.” This means that X and Y have to be placed in ascending order for the statement to work correctly – X must be the lower of the two values.

Here’s a screenshot demonstrating this in Oracle, but virtually any database works the same way. I’ve included examples for both numbers and characters.

Screenshot of using BETWEEN
Arguments to BETWEEN must be in ascending order

Usually, this behavior isn’t a problem; if you are working with literals, as in the screenshot above, you’re likely to put the lower value first just as a matter of habit. However, if you are performing logic based on variables or column values, you need to make sure that the BETWEEN clause gets the lower end of the range first, or else your results may not be what you expect.
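Here is a minimal sketch in Java/JDBC of one way to guard against that; the table and column names are invented for illustration. Normalizing the bounds before binding them ensures the lower value always lands in the first slot of the BETWEEN clause.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class RangeQuery {

    // BETWEEN ? AND ? means ">= first bound AND <= second bound", so if the caller
    // passes the bounds in the wrong order the query silently returns no rows.
    // Swapping the bounds up front avoids that surprise.
    static ResultSet findOrdersInRange(Connection conn, int a, int b) throws SQLException {
        int low = Math.min(a, b);
        int high = Math.max(a, b);

        PreparedStatement ps = conn.prepareStatement(
                "SELECT order_id, amount FROM orders WHERE amount BETWEEN ? AND ?");
        ps.setInt(1, low);   // the lower end of the range must come first
        ps.setInt(2, high);
        return ps.executeQuery();
    }
}
```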

The Many Sides of You

Somebody probably hates you right now, no matter how nice you think you are.

I was recently reviewing evaluations I received for an online course I taught. Two strikingly different evaluations happened to be consecutive in the list. Here’s the first:

A bad teaching evaluation, obviously intended for my evil twin

Yowza! I must be awful. However, apparently not in everybody’s mind, because here’s the second:

A good teaching evaluation - I'm glad mom took the class

As it turns out, across the 8 evaluations I received, my average grade was an 8, and my median grade was a 9. So, most folks thought I was half-decent, but that one camper was definitely not happy with me.

Here’s my speculation – I pissed that person off somehow. If I had just been not-so-good, I wouldn’t have received the zeros or the scathing commentary. I would have instead gotten poor-ish grades and inspired maybe a sentence about how I’m not worth the student’s hard-earned tuition dollars. A zero, however, is a revenge grade – a totally different ballpark.

Here’s the scary thing – I have no idea why anybody in that class would have given me that grade. I don’t remember any significant conflicts with anybody. As I recall, I was fairly easygoing, and nobody did particularly poorly. Yet, at some point, something I said or did drove this person into a fury, and they vented using the only weapon at their disposal. While the specific mark was engulfed by the numbers surrounding it, its lesson is indelible: sooner or later, we are bound to make somebody around us fuming mad for reasons that are totally mysterious to us. It follows that the people at whom we fume may not always know why we are angry. Your arch-nemesis may think of you as that grumpy person who doesn’t ever say hello.

We experience each other as facets, particularly when we have only limited interactions. It’s hard, maybe pointless, for anybody to fret over all of their facets. Better to aim for a high average and get comfortable with the idea of an occasional zero.

Does Your Process Scale?

Processes, like systems, have to scale both vertically and horizontally.

We know that the systems we build have to perform under pressure. So, if we’re smart (and not so burdened that we are just barely getting things out the door), we do load testing, concurrency testing, scalability testing and all the other things that help us to ensure that our products can step confidently out of the development ecosystem and handle the workaday rigors of production use.

What about our processes? Often, we test a new process or methodology in isolation, on some low-risk demo project. We see how things work. We fidget, tweak, and tune. Eventually, we add enough and subtract enough to get the process humming along, until it’s just right (or at least good enough). Then, we say that we nailed down our process.

However, sometimes we forget that our process has to scale, just like our systems. Can the project management process you defined work for seven simultaneous projects all managed by the same PM? If it consumed most of the PM’s time on your simple demo project, what is going to happen to it when you try to apply it on a program or department level scale? What about those design artifacts you created for the demo project? It was nice to have class diagrams, sequence diagrams, stakeholder profiles and data dictionaries for the demo project, but can you sustain that level of documentation for your marquee product, the one that keeps eight developers continuously occupied?

Your systems have to scale, and so does your process. Use one that you can sustain consistently across the breadth of your concerns, given your available resources. Just because something worked on a demo project does not mean it will survive a collision with the rest of the stuff on your plate. Organizations often design processes for vertical scalability (adjusting the rigor and demands to the size of the effort), but fewer think about scaling horizontally (ensuring that you can still apply the process to many efforts of a similar size).

Relearning My Lesson

Already knowing something is a terrible reason for not learning it again.

Last year, I decided that I didn’t like a certain person, based on fleeting observations, cursory interactions, the testimony of others, and a raft of tenuous inferences.

Yesterday, circumstance put me in a position to actually talk with him for the first time. It turns out he’s a pretty good guy – intelligent, interesting, engaged, funny, and self-deprecating without telegraphing a sense of false humility. I was flat wrong about him.

I was surprised by my discovery, but of course, I shouldn’t have been. It’s happened to me before – making judgments based on shaky evidence, coming to a false conclusion about somebody, writing that person off, and then being proven dramatically wrong. Based on past situations of a similar nature, I should have known better than to be so judgmental – actually, I did know better – but I repeated the mistake anyway.

Knowing something (e.g., don’t judge a book by its cover) and even really believing in it (having some emotional affinity for the knowledge) is quite different from mastering it. Mastering it, to the point where you are consistently faithful to it, takes a good bit of practice. The mere fact that you assent to a given proposition doesn’t make it second nature to you. It only becomes second nature when you purposefully keep it near the top of your consciousness, where you can recollect it readily and then live accordingly.

That’s why trite sayings, truisms, and other fundamentals that we come to take for granted are really very important to hear. For most ideas, repetition and reinforcement are as important as the original learning. While I sometimes chafe at the expression of basic truths / principles, and quietly dismiss those who offer them as unsophisticated, it turns out that I often need a refresher on such truths. When I was 17, I was fairly good at calculus, but virtually all of that discipline is lost to me today, chiefly for lack of practice. I’d hate for that to happen with many other pieces of knowledge that are in principle already known to me.

Sooner or later, I’ll let my recent lesson about snap judgments drift out of memory, and not long after that I’ll be doomed to repeat it. Hopefully, after I re-learn it enough times, I’ll get to the point where I finally know it – by heart, so to speak.