The Mistake Zappos Made After the Hack

After being hacked and apparently exposing customer password data in some form, Zappos has made it impossible for you to tell what password you had used on their site.

Zappos just made a big mistake – their site was hacked, and information about 24 million customer accounts was stolen, including password hashes. In response, Zappos has asked that its customers create new passwords for the site.

Part of the above-linked request reads as follows: “We also recommend that you change your password on any other web site where you use the same or a similar password.”

This implies that you should be able to tell somehow what your password with Zappos was. However, it looks as though any attempt to log in with a recognized account (email address) generates the same message, regardless of what password you use.

Try it yourself – if you pick a likely-to-be-taken address (I used the surname “Miller” in my attempt), you’ll get the message, regardless of what password you use. This shows that you can’t figure out what your Zappos password used to be, and so unless you remember it, you won’t be able to change it on other sites that use the same one.

What would have been better? Forcing users through a password reset via their registered email, but first giving them some way to verify whether the credentials they just entered matched the formerly valid ones.
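The flow I have in mind could look something like this – a minimal Python sketch, with all names and messages hypothetical, and a bare salted SHA-256 standing in for whatever (hopefully much slower) hash a real site would use:

```python
import hashlib
import secrets

def hash_password(password: str, salt: str) -> str:
    # Illustrative only - a real site should use bcrypt/scrypt/argon2.
    return hashlib.sha256((salt + password).encode()).hexdigest()

def check_retired_password(submitted: str, salt: str, old_hash: str) -> str:
    """Always force a reset, but tell the user whether the password they
    just typed matched the old one, so they know what to change elsewhere."""
    if hash_password(submitted, salt) == old_hash:
        return ("That WAS your old password here - change it on any other "
                "site where you used it. Now pick a new one.")
    return "That was not your old password here. Please pick a new one."

salt = secrets.token_hex(8)
old_hash = hash_password("hunter2", salt)
print(check_retired_password("hunter2", salt, old_hash))
print(check_retired_password("wrong-guess", salt, old_hash))
```

Either way the user ends up with a new password, but now the “change it on other sites” advice is actually actionable.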

Upgrading a Desupported Version of Ubuntu

When upgrading a desupported version of Ubuntu, make sure to include the prescribed URLs in your sources.list, but also comment out the corresponding URLs from the regular releases site.

Shamefully, I let the OS on the VPS (virtual private server) which runs this blog fall out of support. I was running Ubuntu 9.10 (Karmic Koala – shudder), which apparently went out of support in – yikes! – April of 2011.

Okay, so no big deal, right? I’ll just upgrade to a supported version. However, once your flavor gets too far out of support, it turns out that the baked-in upgrade tools don’t work so well any more, and you have to do some extra gymnastics. Specifically, some of the packages you need in order to upgrade get kicked off of archive.ubuntu.com and moved to old-releases.ubuntu.com, so you have to edit /etc/apt/sources.list to include some references to the Ubuntu retirement home. The basic process is described on the Ubuntu “End of Life” upgrade page, and Google-fu will reveal a number of helpful tutorials on the subject.

I did the things prescribed in the above-linked EOL upgrade instructions, but when I went to install the new version of update-manager-core in preparation for the upgrade, I was met with a raft of 404 errors. I re-read the instructions, Googled like crazy, ran apt-get using every switch imaginable, and even sprinkled some goat’s blood on my keyboard (#offensive!), but I still couldn’t get it to work. I got pretty frustrated, especially because I don’t have much depth in this area, and am ill-equipped to problem solve if the cookbook approach fails.

After some swearing and deliberation, I finally did something that may seem common-sensical but which was not at all obvious to me. In fact, when I did it, I thought I was throwing up a Hail Mary. I had already added the required sources in /etc/apt/sources.list, as below:

## EOL upgrade sources.list
# Required
deb http://old-releases.ubuntu.com/ubuntu/ karmic main restricted universe multiverse
deb http://old-releases.ubuntu.com/ubuntu/ karmic-updates main restricted universe multiverse
deb http://old-releases.ubuntu.com/ubuntu/ karmic-security main restricted universe multiverse

# Optional
#deb http://old-releases.ubuntu.com/ubuntu/ karmic-backports main restricted universe multiverse

So, on what I thought was a long shot, I commented out the corresponding entries for archive.ubuntu.com and security.ubuntu.com, and gave it a whirl again. To my surprise, that was the special sauce I needed, and the update manager refresh proceeded without a hitch. (If only that were true for the rest of my upgrade, but that’s a tale for another time.)
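If you want to script the edit rather than do it by hand, it’s mechanical enough. Here’s a minimal Python sketch (function and sample names are mine) that comments out deb lines still pointing at the regular release hosts:

```python
# Hosts whose packages for an EOL release have been retired, per the
# Ubuntu End of Life upgrade instructions.
RETIRED_HOSTS = ("archive.ubuntu.com", "security.ubuntu.com")

def retire_sources(sources_list: str) -> str:
    """Comment out deb lines that still point at the regular release hosts."""
    out = []
    for line in sources_list.splitlines():
        stripped = line.strip()
        if stripped.startswith("deb") and any(h in stripped for h in RETIRED_HOSTS):
            out.append("# " + line)  # now-dead source: comment it out
        else:
            out.append(line)
    return "\n".join(out)

sample = """\
deb http://archive.ubuntu.com/ubuntu/ karmic main restricted
deb http://old-releases.ubuntu.com/ubuntu/ karmic main restricted universe multiverse"""
print(retire_sources(sample))
```

Back up /etc/apt/sources.list before letting anything like this touch it, of course.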

So, maybe that last part was laughably obvious to some, but it certainly wasn’t obvious to me (how should I know what kind of search chain is being used?), and I hadn’t seen it in any of the “cut and paste” cookbooks I had consulted. Hopefully, some other Linux-inept schmuck will read this and be spared some of my torments.

Meaty Methods and Novel Names

Measure the substance of a process by what is left over when you remove its terminology.

Processes, methodologies, and frameworks often exhibit a compulsion to invent novel names for ordinary concepts, and to create highly specialized senses of everyday words. Examples include “Voice of the Customer” in Six Sigma and “fast tracking” in PMI parlance. Sometimes, there are benefits to having such names, but these are mainly associated with efficiencies in explaining the process/method/framework itself. Even if a specialized vocabulary would help me communicate with other members of the cognoscenti, I’m usually communicating with a mixed audience, and so my grasp of the vocabulary atrophies for want of practice. (When I mentioned “fast tracking” a few sentences back, I had to go look up the term to remind myself of its specific meaning – i.e., doing things in parallel – in the PMI world.)

Outside of the phase in which somebody is learning about the process/method/framework, I’m not sure there’s much to be gained from wrapping special terminology around common sense concepts. In large part, I judge the substance of a process/method/framework (not its correctness or applicability – those are different matters) in terms of what’s offered outside of the vocabulary. If I didn’t have to use your terminology (hint: I probably won’t), how much is left to learn? That’s the meat.

The high-level DMAIC (Define / Measure / Analyze / Improve / Control) process of Six Sigma fame is not very meaty. It’s more like terminological shorthand for how an intelligent and methodical person would go about solving a problem in a serious way. You don’t learn it so much as recognize what the Six Sigma folks want you to call it. In contrast, the idea that I should focus on process control via standard deviations and have to measure conscientiously in order to do so – a key tenet of Six Sigma that fills out the intended implementation of DMAIC – is pretty meaty.

So, You Wrote a Document

A great document won’t breathe life into an idea, but salesmanship will.

So, you wrote a document. That’s a great start, but it’s only a start. Whatever is in there – an argument, an idea, a plan, a process – may be fantastic, but it’s not going to succeed because of the document. The document was mainly for you – a way to organize your thoughts. Its mere existence will lend some credibility to whatever it is that you’re pushing, but few people are going to really read it, and next to nobody will read it more than once.

Now that the document is done, it’s time to start working on your elevator pitch. For better or worse, the 60-second version of your position will have much more impact than the 60-page version. It’s time to start selling.

If It Were Easy, It Might Still Remain Undone

Don’t underestimate the number of problems that can be solved by expending slightly more effort and attention than your predecessors did.

“If it were easy, it would be done already.”

This is something I hear frequently. On the face of things, it sounds reasonable – something a been-around-the-block veteran says to the starry-eyed newcomer who has yet to be introduced to reality. It puts the arrogant in their place, reminding them that they are not so much smarter / better / more motivated than those who came before them. It bespeaks practicality, and a certain worldly wisdom.

Often, it is just flat wrong.

Granted, one can reasonably expect that a valuable goal which has remained unattained will be attended by non-trivial obstacles. However, that is the sort of thing you bear in mind so as to temper your expectations – not the sort of thing that should slow you down in advance of locating and confronting those obstacles. The “if it were easy” refrain is more likely to be a catchphrase of complacency than a sign of sagacity. It is used as warrant for the worst kind of failure, the kind that comes from never even trying to succeed.

What suggests that you can achieve a goal that has remained unrealized, in spite of many competent predecessors?

  • Even good people tend to buy into organizational myths about how difficult things are or the reasons that such-and-such won’t work. The people before you might not have even tried to solve the problem because they were sold the same “if it were easy” notion. This dynamic becomes progressively self-reinforcing as more people come to brand a particular problem as hard, longstanding, etc. Such branding creates a kind of inertia that is difficult to overcome, and helps to explain why new faces are often able to make tremendous changes to an organization in a short amount of time – they have yet to be introduced to the contents of the “don’t bother” list.
  • Even good people get lazy sometimes – the problem you’re looking at may be one of those areas where nobody competent has ever really rolled up their sleeves yet.
  • You can’t tend to every problem – maybe nobody has quite gotten around to this one at all yet. (Don’t trust the organizational historians who say otherwise, unless they can credibly explain why your predecessors stopped short of the goal line.)
  • Achievements are incremental – the blockers that kept your target goal from being realized in the past may have been recently removed by a new information system, org structure, business rule change, etc. You might be the first one to stand on the shoulders of the giants who came before you, and do full justice to their achievements by building upon them.

Plenty of easy (or at least very manageable) things that are worth doing remain undone in every organization, and you’re not tilting at windmills just because you don’t accept the blithe alibis of those who don’t want to bother expending some uncommon effort. What’s relatively easy may very well remain undone. However, what’s already done is bound to be even easier, which is why so many people avoid the risks and uncertainties that attend meaningful progress.

Anticipation vs Explication

Any smart person can come up with a plausible explanation of something after the fact – you’re adding real value when you can predict an outcome in advance.

Casey Anthony was just found not guilty of the murder of her daughter. Almost everybody, professional and layperson alike, thought she would be convicted.

Almost immediately, the pundits weighed in with explanations – the prosecution went too far, they said, in pushing for first degree murder. The evidence wasn’t strong enough for a first degree conviction, the prosecutors had overreached, a lesser charge would have been more appropriate, etc.

However, the prosecution had wrapped up its case weeks before the acquittal. If the pundits saw a mismatch between the charges and the evidence, they could have piped up then. Yet, most said nary a word about this, because they did not see it as an issue at the time. They, like the rest of the world, foresaw a conviction.

When the stunning acquittal came, they were quick to manufacture reasons. These reasons made sense, in a way, but once you know how a story ends, it’s relatively easy for an intelligent person to select a set of facts and inferences that leads toward the outcome, and weave these together into an explanation of what happened. It’s much harder (and more valuable) to weave together the facts and probabilities in advance, and correctly divine an outcome before the movie ends.

The world needs more psychics and prophets – we have enough Monday morning quarterbacks and plausible explanations that come when the punchline is already obvious.

How to Define a Software Development Process

Your software development process should be contextual, likable, forceful, and readable, and you should get it this way by iterating over it.

Defining a development process is hard

So, you want to define a software development process for your organization. Good luck. If it isn’t hard work, then you’re probably not doing it right, because you ought to take many factors into account, and those factors tend to be very organization-specific. Although you can gather lots of great ideas from books and process gurus, your ideal process will be so influenced by environmental factors that no text or outsider can tell you what it should be. At the end of the day, all of the tough calls will be on you. Also, since the development process will affect every project and stakeholder in your organization, it’s a high-stakes proposition, and something you probably can’t afford to screw up.

What should the process cover?

Actually, figuring out what kinds of things the process should govern / specify is an essential part of the process definition. (No free lunch here.) It’s not just a matter of figuring out how to answer a series of questions in an organization-specific way, which would be hard enough. You also have to figure out which questions are worth answering in the context of your organization and its goals. A larger development shop may dicker productively over the standard set of documents that will be dissected in design reviews. A one-horse development shop probably doesn’t have the same concern, as there’s nobody to do such reviews.

So, what kinds of things might a development process address? The following rough categories would make for at least a decent start:

  • Activities that will happen: The “coding” part is more or less a given, but are you going to do ROI analyses, design reviews, independent testing, user acceptance testing, management approvals, formal training, etc?
  • Order in which activities will happen: Are you going to be more waterfall-ish, with code mostly following a requirements-gathering and design phase, or are you going to be more agile-ish, and allow your code to grow up alongside your requirements and design?
  • Conditions on activities: What circumstances require or preclude particular activities? What approvals or phase gates govern movement from one activity to another?
  • Deliverables produced: What kinds of products do you need to create in addition to the software? Design documents? Manuals?
  • Enforcement mechanisms: How will you ensure that the requirements of the process are being respected? Will you do process reviews? Audits?

Characteristics of a good process

Consonant with the above, it’s hard to offer much advice on what your process should look like, because your process needs to be tailored to its context. However, here are some meta-level suggestions about software process definition in general:

1) Software practitioners (engineers, project managers, etc) should like the process. They don’t need to take it with them to Disney World, but they shouldn’t be bristling and gritting their teeth as they struggle under its yoke. If practitioners don’t like the process, it won’t be used as intended, and will hence be a failure. Even if a disliked process is mandatory, people will figure out ways to achieve superficial compliance, even though they are really doing things some other way. Accordingly, the process should be designed mostly by the practitioners themselves. This will presumably lead to a process that the practitioners think is valuable, and will greatly increase the likelihood of broad buy-in. Management can and should levy requirements on the process, but the practitioners should be allowed to satisfy those requirements as they see fit.

2) The process must be forceful, which is to say that it must contain mostly shalls – things people have to do (if they want to avoid a beatin’), as opposed to shoulds – things that are recommended. When a group is trying to define a process, members will often weigh in with suggestions, guidelines, and best practices that represent their views on how software engineering should be done. This may seem useful at first, but any group of experienced professionals can say a great deal about how things should be done. Before long, you’ll find yourself writing a software engineering book, filled with a big pile of shoulds that individuals will happily ignore whenever timelines get tight or when they happen to disagree with the recommendation. The payoff of this book will be too small to justify the time it takes to write it. Instead, your process should focus on what is mandatory / required, even if it is only required in certain circumstances. There is value in a big pile of shoulds, but it belongs in knowledge sharing activities (lunch and learn stuff) as opposed to a process definition.

3) The process must be extremely concise (as minimal as possible) and readable. A process that expounds on its topics in an academic and/or lengthy manner will go unread, and a process that goes unread is a process soon to be dead.

4) The process should be developed iteratively, because iteration works for everything. Don’t wait until you have the whole thing nailed down before you deliver something. Rather, start piloting elements of the nascent process as soon as possible, so that you can gain some real-world experience and adjust accordingly. Particularly if your process levies new requirements on people, you’ll want to see where people are tempted to cut corners, and if they still agree in the heat of battle to the same things they agreed to on the whiteboard.

If the process you develop is sensitive to its context, respected by the people who will use it, slanted toward mandatory practices, and right to the point, it will likely be a success. Further, you are more likely to define such a process by iterating over it early and often.

Shared Technology and Ease of Integration

A shared set of technologies implies very little about how easy it will be to integrate two products.

In a past organization (of course, tales of foolishness are never explicitly set in one’s present organization), we were in the midst of a multi-year effort to build THE SYSTEM, which would replace 639 other systems, save a zillion dollars, unify the thirteen colonies, and restore balance to the Force.

In spite of (or maybe as a consequence of) its inestimable virtues, THE SYSTEM (hereafter “TS”) was taking a long honking time to build. As fate would have it, my corner of the organization had a semi-urgent need for a planned part of TS, but that part was not scheduled to be delivered for another two years. Unable to persuade the makers of TS to accelerate development of the piece we needed, multiple persons in my organization became afflicted with the idea that my development group could assume responsibility for building that one chunk of TS ourselves, on a timeline more congenial to our needs.

One person sold the concept thusly – since TS was being developed in Java, if we were to write our version of the sorely needed piece in Java, then said piece could easily be grafted onto the rest of TS when the time came. This strategy would allow us to get what we needed sooner, and would save the makers of TS time, because we would have implemented a significant feature for them, and all they would need to do is integrate it. The key to all of this was that we would build on the same platform (Java), so as to make integration of the two efforts relatively easy to achieve.

That’s a lot like saying that two documents can be easily merged together because both are written in English.

The truth is that a shared technical platform doesn’t buy you very much if you need to integrate two independently developed pieces of software. In fact, it may be a liability of sorts. If I’m trying to incorporate the capabilities of a Ruby application into a C# application, everybody recognizes that I’m going to have to rewrite the Ruby stuff. However, if I’m trying to incorporate one Java application into another, independently developed application, I’m still going to have to rewrite lots of stuff, and the shared platform may actually hurt me insofar as it leads people to assume otherwise.

Like two documents, two independently developed applications may share a common language, but will inevitably differ in goals, style, assumptions, and overall organization. These differences will not be abstracted and isolated in particular places, but will be suffused throughout the code of each in ways that are difficult to systematically identify, let alone tease out.

Although the object-orientation of a language like Java does go some way toward making an application more like a set of modular and reusable components, this capacity is often over-emphasized. The promise of a set of objects that faithfully model the real world and hence transcend problem-specific solutions is usually illusory. There is often no “real world” to be seen outside of some contextual problem space, as the problem space itself heavily colors designers’ perceptions of what the world to be modeled looks like. Designers solving different problems within what is in principle the same domain will inevitably see the domain in terms of the problem, and hence will model different worlds. As such, when it comes time to integrate their code, there will be no conceptual lingua franca to complement the syntax of their shared implementation language, making it virtually impossible to combine the two codebases without either substantially rewriting one or creating some huge integration layer that exists mainly to bridge the conceptual divide.

A shared language doesn’t get you very far, especially not in a world where most technologies can now talk to each other through XML. Even if you start with a shared development language, differences in goals, styles, assumptions, and organization will militate strongly against the ability to integrate any two codebases without significant rework, unless those codebases were intentionally designed to be integrated in the first place.

Iteration Works for Everything

Everybody knows the benefits of iteration for developing software, but iteration should be practiced for developing just about everything.

If you’re working in software development, professional literacy / competence demands some acquaintance with iterative development techniques. From intern to architect, it has been drilled into us all – iterate, iterate, iterate. Most of us can and do carry the tune – we deride the linear ways of waterfall (at least explicitly), and preach the gospel of iteration.

So, why do we iterate? I think these are the best reasons:

  • People often don’t know what they want – It takes seeing some kind of semi-realistic product before people realize what they do and don’t want from a system. Iterative development gives people slices of functionality that they can test-drive, thereby enabling the feedback needed for future iterations.
  • People often miscommunicate – Even when they know what they want, people aren’t always so good at expressing it. Further, even if well-expressed, a requirement might be easily misunderstood. Iteration allows people to look at something in progress and say very simply and specifically what is right or wrong about it.
  • Requirements change  – Targets have a way of moving, even the ones that have already been hit. Iteration gives you a way to continually re-validate requirements and to keep pace with fluid demands.

So, we all know the above, and countenance these truths in our development practices. Yet, for some odd reason, most of us reserve the valuable practice of iteration for creating code. When we need to produce user guides, proposals, standards, COTS evaluations, slide decks, and any other non-software deliverables, we revert to the dark ages of engineering, hoarding our labors and polishing them at length, waiting for the right moment at which to unleash them upon the world.

Your non-software deliverables are just as susceptible to miscommunication, moving targets, and imperfect requirements as your code is. As such, you should iterate over those as well. Don’t wait until you have a complete and thorough document / presentation / proposal before throwing your cards on the table. Instead, get something (e.g., a table of contents, outline, or synopsis) out to other stakeholders as soon as you can, and let the magic of iteration help you to make these products as good as they can be.

Iteration works for everything.

Rules for Meeting on the Maker’s Schedule

Schedule your meetings in a way that is cognizant of the significant difference between the maker’s schedule and the manager’s schedule.

If you manage people who are building software or otherwise doing something that requires sustained, intensive concentration, then you need to read Paul Graham’s essay “Maker’s Schedule, Manager’s Schedule.”

The basic gist of the essay is that the “tax” of holding a meeting is many times worse for a maker (people who build things, and require sustained concentration to do so) than for a manager. The manager tends to view the day as a series of hour-long slots into which obligations, especially meetings, must be fit. The maker, however, tries to get large blocks of uninterrupted time, because of the effort required to get back into the creative “zone” after an interruption. For a maker, three contiguous hours are much more valuable than three isolated hours. A typical software developer will get twice as much coding done from 1:00 to 4:00 as she would from 9:00 to 10:00, 11:00 to 12:00, and 2:00 to 3:00, even though the total time spent seems to be the same in both scenarios.

After reading this article, I now try to schedule meetings using the following rules, which assume that I can see the daily calendars of the makers with whom I’d like to meet:

1) If a maker already has an uninterrupted morning or afternoon, don’t touch it. That’s golden time, and biting into it is like killing the golden goose. Find another time. (This rule decreases greatly in importance if the maker appears to have lots of unscheduled time, but most engineers / developers have a decent number of meetings that they need to attend.)

2) Try to schedule the meeting adjacent to existing meetings, preferably on the side that allows the block of time on the opposite side to be as large as possible.

3) If the above don’t work out, schedule the meeting immediately before or after lunch.

4) As a corollary consideration, schedule your meetings with other managers at times like 10:00 AM and 2:00 PM, so that you will be free to meet with makers (as needed) on a schedule that is more congenial to their interests.
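Rules 1 and 2 can even be mechanized, if you’re so inclined. Here’s a toy Python sketch (hour-granularity, 9:00–17:00 day, all names mine) that picks the one-hour meeting slot whose loss leaves the largest contiguous free block intact:

```python
DAY = range(9, 17)  # candidate start hours for a one-hour slot: 9..16

def largest_free_block(busy: set) -> int:
    """Length (in hours) of the longest uninterrupted free run in the day."""
    best = run = 0
    for hour in DAY:
        run = run + 1 if hour not in busy else 0
        best = max(best, run)
    return best

def pick_slot(busy: set) -> int:
    """Choose the free hour whose booking preserves the biggest free block."""
    free = [h for h in DAY if h not in busy]
    return max(free, key=lambda h: largest_free_block(busy | {h}))
```

With an existing meeting at 10:00, it picks 9:00, sparing the long afternoon block; with a meeting at 13:00, it picks 14:00, right up against the existing meeting. That lines up with the rules above, which is all a toy like this is meant to show.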

I’ve been trying to follow these rules for a month or so, and my folks think they make a difference. They report a little more time to tackle hard problems, and they appreciate the fact that I am cognizant of their scheduling needs. Of course, there are times when I have to violate the rules, particularly when the meeting is large (making it much harder to find common free time) or when it involves somebody who is chronically over-scheduled. For example, my boss has an amazingly small amount of uncalendared time in any given week, and so if I need to get him in a meeting with my folks, I’m pretty much bound to pick whatever free time he might have.

My scheduling concerns are relatively complicated because I have a geographically distributed team, part of which is on eastern time (Philadelphia) and part of which is on mountain time (Denver). So, for team meetings, it’s harder to get a time that works for everybody. 11:00 AM eastern works pretty well because the Philly folks will go to lunch right after and the Denver folks (2 hours behind) are just starting their day. 1:00 PM eastern is also good for similar reasons (Philly folks have lunch before, and Denver folks go to lunch after).
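If you’d rather not do the timezone arithmetic in your head, Python’s standard zoneinfo module will do it for you (the date here is arbitrary; any date in daylight-saving season behaves the same):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# An 11:00 AM Eastern team meeting, seen from Denver (two hours behind).
eastern = datetime(2024, 3, 20, 11, 0, tzinfo=ZoneInfo("America/New_York"))
denver = eastern.astimezone(ZoneInfo("America/Denver"))
print(denver.strftime("%H:%M"))  # 09:00
```

Named zones beat hard-coded offsets because they track daylight-saving transitions for you.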

The difference between maker’s time and manager’s time is pretty obvious in retrospect, but most managers are surprisingly oblivious to it until it is pointed out to them. (I was.) Accounting for the difference is relatively simple, and likely to pay big dividends. So, I’d recommend coming up with your own set of rules for meeting on the maker’s schedule, and sharing them with the other managers in your organization. You’ll make your makers happy if you do.