Upgrading a Desupported Version of Ubuntu

When upgrading a desupported version of Ubuntu, make sure to include the prescribed old-releases.ubuntu.com URLs in your sources.list, but also comment out the corresponding URLs from the regular releases site.

Shamefully, I let the OS on the VPS (virtual private server) which runs this blog fall out of support. I was running Ubuntu 9.10 (Karmic Koala – shudder), which apparently went out of support in – yikes! – April of 2011.

Okay, so no big deal, right? I’ll just upgrade to a supported version. However, once your flavor gets too far out of support, it turns out that the baked-in upgrade tools don’t work so well any more, and you have to do some extra gymnastics. Specifically, some of the packages you need in order to upgrade get kicked off of http://releases.ubuntu.com and moved to http://old-releases.ubuntu.com, so you have to edit /etc/apt/sources.list to include some references to the Ubuntu retirement home. The basic process is described on the Ubuntu “End of Life” upgrade page, and Google-fu will reveal a number of helpful tutorials on the subject.

I did the things prescribed in the above-linked EOL upgrade instructions, but when I went to install the new version of update-manager-core in preparation for the upgrade, I was met with a raft of 404 errors. I re-read the instructions, Googled like crazy, ran apt-get using every switch imaginable, and even sprinkled some goat’s blood on my keyboard (#offensive!), but I still couldn’t get it to work. I got pretty frustrated, especially because I don’t have much depth in this area, and am ill-equipped to problem-solve if the cookbook approach fails.

After some swearing and deliberation, I finally did something that may seem common-sensical but which was not at all obvious to me. In fact, when I did it, I thought I was throwing up a Hail Mary. I had already added the required sources in /etc/apt/sources.list, as below:

## EOL upgrade sources.list
# Required
deb http://old-releases.ubuntu.com/ubuntu/ karmic main restricted universe multiverse
deb http://old-releases.ubuntu.com/ubuntu/ karmic-updates main restricted universe multiverse
deb http://old-releases.ubuntu.com/ubuntu/ karmic-security main restricted universe multiverse

# Optional
#deb http://old-releases.ubuntu.com/ubuntu/ karmic-backports main restricted universe multiverse

So, on what I thought was a long shot, I commented out the corresponding entries for releases.ubuntu.com, and gave it a whirl again. To my surprise, that was the special sauce I needed, and the update manager refresh proceeded without a hitch. (If only that were true for the rest of my upgrade, but that’s a tale for another time.)
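For posterity, here’s roughly what the fix looks like as a one-liner. The sed pattern is the real trick; the mirror hostnames and file path below are illustrative, and the demo runs against a scratch file so you can try it without touching a real sources.list:

```shell
# Demo of commenting out the live-mirror entries. On an actual Karmic box
# you'd point SOURCES at /etc/apt/sources.list (as root), keep the backup,
# and then re-run apt-get update.
SOURCES=/tmp/sources.list

# Example contents: one live-mirror line, one old-releases line.
printf '%s\n' \
  'deb http://archive.ubuntu.com/ubuntu/ karmic main restricted' \
  'deb http://old-releases.ubuntu.com/ubuntu/ karmic main restricted' \
  > "$SOURCES"

cp "$SOURCES" "$SOURCES.bak"                       # always keep a backup

# Comment out every active 'deb' line that does NOT point at old-releases,
# so apt only consults the Ubuntu retirement home.
sed -i '/old-releases/!s/^deb /#deb /' "$SOURCES"

cat "$SOURCES"
```

After the edit, only the old-releases line remains active, which is exactly the state that finally made my update-manager refresh succeed.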

So, maybe that last part was laughably obvious to some, but it certainly wasn’t obvious to me (how should I know what kind of search chain is being used?), and I hadn’t seen it in any of the “cut and paste” cookbooks I had consulted. Hopefully, some other Linux-inept schmuck will read this and be spared some of my torments.

Meaty Methods and Novel Names

Measure the substance of a process by what is left over when you remove its terminology.

Processes, methodologies, and frameworks often exhibit a compulsion to invent novel names for ordinary concepts, and to create highly specialized senses of everyday words. Examples include “Voice of the Customer” in Six Sigma and “fast tracking” in PMI parlance. Sometimes, there are benefits to having such names, but these are mainly associated with efficiencies in explaining the process/method/framework itself. Even if a specialized vocabulary would help me communicate with other members of the cognoscenti, I’m usually communicating with a mixed audience, and so my grasp of the vocabulary atrophies for want of practice. (When I mentioned “fast tracking” a few sentences back, I had to go look up the term to remind myself of its specific meaning – i.e., doing things in parallel – in the PMI world.)

Outside of the phase in which somebody is learning about the process/method/framework, I’m not sure there’s much to be gained from wrapping special terminology around common sense concepts. In large part, I judge the substance of a process/method/framework (not its correctness or applicability – those are different matters) in terms of what’s offered outside of the vocabulary. If I didn’t have to use your terminology (hint: I probably won’t), how much is left to learn? That’s the meat.

The high-level DMAIC (Define / Measure / Analyze / Improve / Control) process of Six Sigma fame is not very meaty. It’s more like terminological shorthand for how an intelligent and methodical person would go about solving a problem in a serious way. You don’t learn it so much as recognize what the Six Sigma folks want you to call it. In contrast, the idea that I should focus on process control via standard deviations and have to measure conscientiously in order to do so – a key tenet of Six Sigma that fills out the intended implementation of DMAIC – is pretty meaty.

So, You Wrote a Document

A great document won’t breathe life into an idea, but salesmanship will.

So, you wrote a document. That’s a great start, but it’s only a start. Whatever is in there – an argument, an idea, a plan, a process – may be fantastic, but it’s not going to succeed because of the document. The document was mainly for you – a way to organize your thoughts. Its mere existence will lend some credibility to whatever it is that you’re pushing, but few people are going to really read it, and next to nobody will read it more than once.

Now that the document is done, it’s time to start working on your elevator pitch. For better or worse, the 60-second version of your position will have much more impact than the 60-page version. It’s time to start selling.

If It Were Easy, It Might Still Remain Undone

Don’t underestimate the number of problems that can be solved by expending slightly more effort and attention than your predecessors did.

“If it were easy, it would be done already.”

This is something I hear frequently. On the face of things, it sounds reasonable – something a been-around-the-block veteran says to the starry-eyed newcomer who has yet to be introduced to reality. It puts the arrogant in their place, reminding them that they are not so much smarter / better / more motivated than those who came before them. It bespeaks practicality, and a certain worldly wisdom.

Often, it is just flat wrong.

Granted, one can reasonably expect that a valuable goal which has remained unattained will be attended by non-trivial obstacles. However, that is the sort of thing you bear in mind so as to temper your expectations – not the sort of thing that should slow you down in advance of locating and confronting those obstacles. The “if it were easy” refrain is more likely to be a catchphrase of complacency than a sign of sagacity. It is used as warrant for the worst kind of failure, the kind that comes from never even trying to succeed.

What suggests that you can achieve a goal that has remained unrealized, in spite of many competent predecessors?

  • Even good people tend to buy into organizational myths about how difficult things are or the reasons that such-and-such won’t work. The people before you might not have even tried to solve the problem because they were sold the same “if it were easy” notion. This dynamic becomes progressively self-reinforcing as more people come to brand a particular problem as hard, longstanding, etc. Such branding creates a kind of inertia that is difficult to overcome, and helps to explain why new faces are often able to make tremendous changes to an organization in a short amount of time – they have yet to be introduced to the contents of the “don’t bother” list.
  • Even good people get lazy sometimes – the problem you’re looking at may be one of those areas where nobody competent has ever really rolled up their sleeves yet.
  • You can’t tend to every problem – maybe nobody has quite gotten around to this one at all yet. (Don’t trust the organizational historians who say otherwise, unless they can credibly explain why your predecessors stopped short of the goal line.)
  • Achievements are incremental – the blockers that kept your target goal from being realized in the past may have been recently removed by a new information system, org structure, business rule change, etc. You might be the first one to stand on the shoulders of the giants who came before you, and do full justice to their achievements by building upon them.

Plenty of easy (or at least very manageable) things that are worth doing remain undone in every organization, and you’re not tilting at windmills just because you don’t accept the blithe alibis of those who don’t want to bother expending some uncommon effort. What’s relatively easy may very well remain undone. However, what’s already done is bound to be even easier, which is why so many people avoid the risks and uncertainties that attend meaningful progress.

Anticipation vs Explication

Any smart person can come up with a plausible explanation of something after the fact – you’re adding real value when you can predict an outcome in advance.

Casey Anthony was just acquitted of the murder of her daughter. Almost everybody, professional and layperson alike, thought she would be convicted.

Almost immediately, the pundits weighed in with explanations – the prosecution went too far, they said, in pushing for first degree murder. The evidence wasn’t strong enough for a first degree conviction, the prosecutors had overreached, a lesser charge would have been more appropriate, etc.

However, the prosecution had wrapped up its case weeks before the acquittal. If the pundits saw a mismatch between the charges and the evidence, they could have piped up then. Yet, most said nary a word about this, because they did not see it as an issue at the time. They, like the rest of the world, foresaw a conviction.

When the stunning acquittal came, they were quick to manufacture reasons. These reasons made sense, in a way, but once you know how a story ends, it’s relatively easy for an intelligent person to select a set of facts and inferences that leads toward the outcome, and weave these together into an explanation of what happened. It’s much harder (and more valuable) to weave together the facts and probabilities in advance, and correctly divine an outcome before the movie ends.

The world needs more psychics and prophets – we have enough Monday morning quarterbacks and plausible explanations that come when the punchline is already obvious.

How to Define a Software Development Process

Your software development process should be contextual, likable, forceful, and readable, and you should get it this way by iterating over it.

Defining a development process is hard

So, you want to define a software development process for your organization. Good luck. If it isn’t hard work, then you’re probably not doing it right, because you ought to take many factors into account, and those factors tend to be very organization-specific. Although you can gather lots of great ideas from books and process gurus, your ideal process will be so influenced by environmental factors that no text or outsider can tell you what it should be. At the end of the day, all of the tough calls will be on you. Also, since the development process will affect every project and stakeholder in your organization, it’s a high-stakes proposition, and something you probably can’t afford to screw up.

What should the process cover?

Actually, figuring out what kinds of things the process should govern / specify is an essential part of the process definition. (No free lunch here.) It’s not just a matter of figuring out how to answer a series of questions in an organization-specific way, which would be hard enough. You also have to figure out which questions are worth answering in the context of your organization and its goals. A larger development shop may dicker productively over the standard set of documents that will be dissected in design reviews. A one-horse development shop probably doesn’t have the same concern, as there’s nobody to do such reviews.

So, what kinds of things might a development process address? The following rough categories would make for at least a decent start:

  • Activities that will happen: The “coding” part is more or less a given, but are you going to do ROI analyses, design reviews, independent testing, user acceptance testing, management approvals, formal training, etc?
  • Order in which activities will happen: Are you going to be more waterfall-ish, with code mostly following requirements gathering and design, or are you going to be more agile-ish, and allow your code to grow up alongside your requirements and design?
  • Conditions on activities: What circumstances require or preclude particular activities? What approvals or phase gates govern movement from one activity to another?
  • Deliverables produced: What kinds of products do you need to create in addition to the software? Design documents? Manuals?
  • Enforcement mechanisms: How will you ensure that the requirements of the process are being respected? Will you do process reviews? Audits?

Characteristics of a good process

Consonant with the above, it’s hard to offer much advice on what your process should look like, because your process needs to be tailored to its context. However, here are some meta-level suggestions about software process definition in general:

1) Software practitioners (engineers, project managers, etc) should like the process. They don’t need to take it with them to Disney World, but they shouldn’t be bristling and gritting their teeth as they struggle under its yoke. If practitioners don’t like the process, it won’t be used as intended, and will hence be a failure. Even if a disliked process is mandatory, people will figure out ways to achieve superficial compliance, even though they are really doing things some other way. Accordingly, the process should be designed mostly by the practitioners themselves. This will presumably lead to a process that the practitioners think is valuable, and will greatly increase the likelihood of broad buy-in. Management can and should levy requirements on the process, but the practitioners should be allowed to satisfy those requirements however they see fit.

2) The process must be forceful, which is to say that it must contain mostly shalls – things people have to do (if they want to avoid a beatin’), as opposed to shoulds – things that are recommended. When a group is trying to define a process, members will often weigh in with suggestions, guidelines, and best practices that represent their views on how software engineering should be done. This may seem useful at first, but any group of experienced professionals can say a great deal about how things should be done. Before long, you’ll find yourself writing a software engineering book, filled with a big pile of shoulds that individuals will happily ignore whenever timelines get tight or when they happen to disagree with the recommendation. The payoff of this book will be too small to justify the time it takes to write it. Instead, your process should focus on what is mandatory / required, even if it is only required in certain circumstances. There is value in a big pile of shoulds, but it belongs in knowledge sharing activities (lunch and learn stuff) as opposed to a process definition.

3) The process must be extremely concise (as minimal as possible) and readable. A process that expounds on its topics in an academic and/or lengthy manner will go unread, and a process that goes unread is a process soon to be dead.

4) The process should be developed iteratively, because iteration works for everything. Don’t wait until you have the whole thing nailed down before you deliver something. Rather, start piloting elements of the nascent process as soon as possible, so that you can gain some real-world experience and adjust accordingly. Particularly if your process levies new requirements on people, you’ll want to see where people are tempted to cut corners, and if they still agree in the heat of battle to the same things they agreed to on the whiteboard.

If the process you develop is sensitive to its context, respected by the people who will use it, slanted toward mandatory practices, and right to the point, it will likely be a success. Further, you are more likely to define such a process by iterating over it early and often.

Shared Technology and Ease of Integration

A shared set of technologies implies very little about how easy it will be to integrate two products.

In a past organization (of course, tales of foolishness are never explicitly set in one’s present organization), we were in the midst of a multi-year effort to build THE SYSTEM, which would replace 639 other systems, save a zillion dollars, unify the thirteen colonies, and restore balance to the Force.

In spite of (or maybe as a consequence of) its inestimable virtues, THE SYSTEM (hereafter “TS”) was taking a long honking time to build. As fate would have it, my corner of the organization had a semi-urgent need for a planned part of TS, but that part was not scheduled to be delivered for another two years. Unable to persuade the makers of TS to accelerate development of the piece we needed, multiple persons in my organization became afflicted with the idea that my development group could assume responsibility for building that one chunk of TS ourselves, on a timeline more congenial to our needs.

One person sold the concept thusly – since TS was being developed in Java, if we were to write our version of the sorely needed piece in Java, then said piece could easily be grafted onto the rest of TS when the time came. This strategy would allow us to get what we needed sooner, and would save the makers of TS time, because we would have implemented a significant feature for them, and all they would need to do is integrate it. The key to all of this was that we would build on the same platform (Java), so as to make integration of the two efforts relatively easy to achieve.

That’s a lot like saying that two documents can be easily merged together because both are written in English.

The truth is that a shared technical platform doesn’t buy you very much if you need to integrate two independently developed pieces of software. In fact, it may be a liability of sorts. If I’m trying to incorporate the capabilities of a Ruby application into a C# application, everybody recognizes that I’m going to have to rewrite the Ruby stuff. However, if I’m trying to incorporate one Java application into another, independently developed application, I’m still going to have to rewrite lots of stuff, and the shared platform may actually hurt me insofar as it leads people to assume otherwise.

Like two documents, two independently developed applications may share a common language, but will inevitably differ in goals, style, assumptions, and overall organization. These differences will not be abstracted and isolated in particular places, but will be suffused throughout the code of each in ways that are difficult to systematically identify, let alone tease out.

Although the object-orientation of a language like Java does go some way toward making an application more like a set of modular and reusable components, this capacity is often over-emphasized. The promise of a set of objects that faithfully model the real world and hence transcend problem-specific solutions is usually illusory. There is often no “real world” to be seen outside of some contextual problem space, as the problem space itself heavily colors designers’ perceptions of what the world to be modeled looks like. Designers solving different problems within what is in principle the same domain will inevitably see the domain in terms of the problem, and hence will model different worlds. As such, when it comes time to integrate their code, there will be no conceptual lingua franca to complement the syntax of their shared implementation language, making it virtually impossible to combine the two codebases without either substantially rewriting one or creating some huge integration layer that exists mainly to bridge the conceptual divide.

A shared language doesn’t get you very far, especially not in a world where most technologies can now talk to each other through XML. Even if you start with a shared development language, differences in goals, styles, assumptions and organization will militate strongly against the ability to integrate any two codebases without significant rework, unless those codebases were intentionally designed to be integrated in the first place.

Rules for Meeting on the Maker’s Schedule

Schedule your meetings in a way that is cognizant of the significant difference between the maker’s schedule and the manager’s schedule.

If you manage people who are building software or otherwise doing something that requires sustained, intensive concentration, then you need to read Paul Graham’s essay “Maker’s Schedule, Manager’s Schedule.”

The basic gist of the essay is that the “tax” of holding a meeting is many times worse for a maker (people who build things, and require sustained concentration to do so) than for a manager. The manager tends to view the day as a series of hour-long slots into which obligations, especially meetings, must be fit. The maker, however, tries to get large blocks of uninterrupted time, because of the effort required to get back into the creative “zone” after an interruption. For a maker, three contiguous hours are much more valuable than three isolated hours. A typical software developer will get twice as much coding done from 1:00 to 4:00 as she would from 9:00 to 10:00, 11:00 to 12:00, and 2:00 to 3:00, even though the total time spent seems to be the same in both scenarios.

After reading this article, I now try to schedule meetings using the following rules, which assume that I can see the daily calendars of the makers with whom I’d like to meet:

1) If a maker already has an uninterrupted morning or afternoon, don’t touch it. That’s golden time, and biting into it is like cutting open the golden goose. Find another time. (This rule decreases greatly in importance if the maker appears to have lots of unscheduled time, but most engineers / developers have a decent number of meetings that they need to attend.)

2) Try to schedule the meeting adjacent to existing meetings, preferably on the side that allows the block of time on the opposite side to be as large as possible.

3) If the above don’t work out, schedule the meeting immediately before or after lunch.

4) As a corollary consideration, schedule your meetings with other managers at times like 10:00 AM and 2:00 PM, so that you will be free to meet with makers (as needed) on a schedule that is more congenial to their interests.
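Out of curiosity, I sketched rules 1 and 2 as a toy Python function. Everything here is made up for illustration – whole-hour slots, a 9-to-5 day, and hypothetical busy blocks – but it captures the “touch an existing meeting, preserve the biggest free block” idea:

```python
def pick_meeting_slot(busy, day_start=9, day_end=17, length=1):
    """Pick a meeting slot adjacent to an existing meeting.

    busy: sorted, non-overlapping (start_hour, end_hour) pairs.
    Returns a (start, end) slot, or None if nothing fits.
    """
    # Rule 2: only consider slots that butt up against an existing meeting,
    # which automatically leaves an untouched morning/afternoon alone (Rule 1).
    candidates = []
    for start, end in busy:
        for slot in ((start - length, start), (end, end + length)):
            if slot[0] < day_start or slot[1] > day_end:
                continue
            if any(s < slot[1] and slot[0] < e for s, e in busy):
                continue  # overlaps another meeting
            candidates.append(slot)
    if not candidates:
        return None

    # Prefer the placement that leaves the largest uninterrupted free block.
    def largest_free_block(slot):
        edges = [day_start]
        for s, e in sorted(busy + [slot]):
            edges += [s, e]
        edges.append(day_end)
        return max(edges[i + 1] - edges[i] for i in range(0, len(edges), 2))

    return max(candidates, key=largest_free_block)

# With meetings at 10-11 and 14-15, the 9-10 slot wins: it keeps the
# three-hour 11:00-2:00 block intact.
print(pick_meeting_slot([(10, 11), (14, 15)]))  # prints (9, 10)
```

Obviously nobody schedules meetings with a script, but writing it out this way made the trade-off concrete for me: the “best” slot is the one that does the least damage to contiguous maker time.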

I’ve been trying to follow these rules for a month or so, and my folks think they make a difference. They report a little more time to tackle hard problems, and they appreciate the fact that I am cognizant of their scheduling needs. Of course, there are times when I have to violate the rules, particularly when the meeting is large (making it much harder to find common free time) or when it involves somebody who is chronically over-scheduled. For example, my boss has an amazingly small amount of uncalendared time in any given week, and so if I need to get him in a meeting with my folks, I’m pretty much bound to pick whatever free time he might have.

My scheduling concerns are relatively complicated because I have a geographically distributed team, part of which is on eastern time (Philadelphia) and part of which is on mountain time (Denver). So, for team meetings, it’s harder to get a time that works for everybody. 11:00 AM eastern works pretty well because the Philly folks will go to lunch right after and the Denver folks (2 hours behind) are just starting their day. 1:00 PM eastern is also good for similar reasons (Philly folks have lunch before, and Denver folks go to lunch after).

The difference between maker’s time and manager’s time is pretty obvious in retrospect, but most managers are surprisingly oblivious to it until it is pointed out to them. (I was.) Accounting for the difference is relatively simple, and likely to pay big dividends. So, I’d recommend coming up with your own set of rules for meeting on the maker’s schedule, and sharing them with the other managers in your organization. You’ll make your makers happy if you do.

One-Page Project Plans

Because project management is a social, participative activity, the effectiveness of particular methods is inevitably bound up with the culture and aptitudes of the organization in which these methods will be employed. For an organization with little project management experience, earned value calculations don’t appear to be meaningfully different than the blithe “we’re 90 percent done” statuses that plague poorly run projects. Similarly, scope statements and change management plans are unlikely to find many admirers in organizations that rely heavily on oral communication and often fly by the seat of their pants.
In such organizations, trying to manage projects with the overt use of PMI-style techniques is probably futile and may even be counter-productive. Not every organization is ready for the PMI way of doing things, and if formal project management techniques are introduced clumsily, the organization is likely to recoil and dismiss these as a waste of time. Instead, it is often better to meet the organization where it is and begin with simple methods having broad, intuitive appeal. 

The above is an excerpt from a 2-part series I wrote for Gantthead under the title “When PMI is TMI.” In that series, I advocated using one-page project plans for organizations that have minimal project management culture in place, because more elaborate methods are likely to meet with insufficient support in such organizations. While a one-page plan is obviously pretty thin, in many cases it would be preferable to more ambitious approaches that can fall into rapid disuse and/or actually turn people against formal project management techniques.

The articles are linked below, and have generated some favorable comments among the members of the Gantthead community. Part II also includes a sample of a one-page project plan that I used a few years back. Unfortunately, Gantthead requires registration before you can read an article in full – sorry about that.

When PMI is TMI, Part I

When PMI is TMI, Part II

But Can You Failback?

Your disaster recovery plans need to address not only failing over to standby resources, but also how you’re going to make your way back when the time comes.

Many organizations practice failover testing, wherein they engage a backup / secondary site in the event of trouble with their primary location / infrastructure. They run disaster recovery tests, and if they are able to continue their processing with their backup plan, they think they’re in pretty good shape. In a way, they are. However, few organizations think much about failing back – that is, coming back from their standby location / alternate hardware and resuming production operations using their normal set-up. It is a non-trivial thing to do, as it requires data flows and perhaps even operational process adjustments that the organization almost never gets to practice. Further, it’s almost certainly part of what you’ll have to do (eventually) in the event of a mishap. Yet, most disaster recovery plans don’t even address failback – it’s just assumed that the organization will be able to do it once the primary site is ready again. It’s worth making this return trip part of your disaster recovery plan.

I once worked for an organization that was big on disaster recovery / business continuity. They were pretty good at the business continuity part, and could move the work done (paying people) from one location to another pretty seamlessly in the event of crippling snowstorms, fires, and the like. (If you think that sort of thing is easy, then you probably haven’t done it.) However, their ability to do this was facilitated by the fact that all sites shared a common data center in a separate location.

Given this dependency, they also spent a fair amount of time planning for data center outages. They had regular back-ups to a remote location equipped with standby hardware, and would run twice-yearly simulations of losing the primary site and continuing operations using the backup site. After a few attempts, they had most of the kinks worked out, and understood the timings and dependencies associated with such failover pretty well. Here I should mention that failing over a data center is much more complicated than failing over a single server, just as swapping out one piece of an engine is much easier than swapping out all of its parts and making sure they still align with one another.

One weekend, the feared “big event” arrived – our primary data center was flooded by some kind of pipe failure. Operations stopped dead. Yet, we didn’t fail over to our backup site, as almost everyone would have expected, because somebody asked a simple question – how do we come back?

The secondary site was clearly a temporary measure – understaffed and underpowered to do what we needed to do on a long-term basis. Using it was always assumed to be a sprint – a temporary burst of effort that would soon subside. (Imagine the cost of having it be otherwise, and the meter on said cost going up year after uneventful year.) Such an arrangement requires some plan for ending the sprint, and coming back to the normal production site as expediently as possible.

We had never practiced coming back.

Fortunately for us, the outage happened at a low period in our monthly cycle, where we could skate by with a few days of downtime. (If it had happened a week earlier or a week later, then millions of people wouldn’t have gotten paid, and you would have heard about it on the six o’clock news.) So, rather than move things over to a temporary platform and then move them back just in time for our big rush, we just waited for the primary site to be made available again. In many ways, this was bad, but the higher-ups decided that it was better than the risks we would run by failing back without strong, tested processes right before our busy time. (The risks would have been nasty things like overpayments, missed payments, erroneous payments, etc.)

So, we sat tight, twiddled some thumbs, and waited out the outage while the people at the primary data center worked like lunatics in order to restore it in time. Our often-practiced disaster recovery plan had proven to be a failure, not because we couldn’t handle a disaster, but because we couldn’t handle returning to normal in a reliable and predictable way.

So, if your disaster recovery plans include failing over to an alternate location or even alternate hardware, make sure they also specify what happens after the clouds lift, and practice those things too. The middle of a disaster is not the time to be sorting out those kinds of details.