Institutional memory and mass layoffs during recessions

(written by lawrence krubner, however indented passages are often quotes). You can contact lawrence at: lawrence@krubner.com, or follow me on Twitter.

During recessions, companies sometimes fire too many people, losing the institutional memory that glues the organization together:

I’ve seen memory drains during downturns in the economy. The company will offer the oldest, most experienced, highest-paid employees a buy-out, or they’ll offer them five years’ extra pension/retirement if they leave now. Mgt figures that consultants can be hired if they ever need X, Y or Z done again, at a fraction of the cost of the employees.

Many senior employees accept the offers, and then later mgt realizes that 95% of their Fortran or Cobol (or name your old programming language/product) experience has vanished. Thank god the software is mostly in maintenance mode, but client problems do still arise, and what once took 15 minutes to fix now requires a week and phone calls to retirees, begging them to help. They’ll pay them loads of money, too, just for a few minutes of work.

Consultants turn out to lack the domain-specific knowledge required to fix the problem.

Bottom line is that sometimes mgt is so focused on hitting short-term financial goals that they lose sight of the big, long-term picture. Business needs fewer 20-something MBAs who only see this quarter’s report, and more managers who understand that business is a decades-long proposition and that senior employees are key to maintaining an edge and passing along knowledge.

This is an odd suggestion, and one that I am doubtful about:

This is sad, scary, and, as far as I’ve seen, an inherent part of any long-running engineering project. As software developers, we have a hard time understanding the rationale of some of the code we wrote last week, let alone someone else’s code from a year ago (I don’t want to imagine any of my company’s code running 30 years from now!).

In a recent series of design meetings I was running for a large project, I started to tackle some of these problems by emailing out the final decisions we had made at the end of each meeting. The following week we’d still have problems understanding/remembering the chain of arguments that led to the final decisions, and we’d lose some time re-tracing our steps. So I then adjusted to not only summarizing the decisions at the end of each meeting, but noting step by step the arguments/debates that led to them. Then at the beginning of each meeting I’d review them with the group. This worked, but it was an awful lot of note taking.

I almost wonder whether, now that more and more of these discussions are done through work email, IM, and version-controlled documentation (and code), the job of future archaeologists will be much more manageable (though still hard and fraught with risk). Maybe more and more tools will be developed to piece together a project’s evolution, rationale, and implicit assumptions and decisions.

This is a great response:

The problem with going through people’s old email and such is that the signal-to-noise ratio is extremely low, the content is often of a timely nature, and it may not represent the final outcome, especially if work has been done on the system after the fact.

What we need is testable documentation. I do IT work most of the time, and document using a mashup of Markdown and text snippets in a specific format for network documentation, all kept in version control.

I then have a motley collection of scripts and tools that run against my text files and do simple testing like pinging all the hosts mentioned, checking DNS entries, etc. Frequently I’ll write up docs before I start work, and then by the end of the work all the tests will pass. It’s TDD applied to network design.
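To make the idea concrete, here is a minimal sketch of what one of those scripts might look like. The file name and the "host:" line convention are my own assumptions, not the commenter's actual format; the point is simply that documentation kept in a parseable plain-text form can be tested the way code is.

```python
#!/usr/bin/env python3
"""Check that every host mentioned in a network doc still resolves and pings.

Sketch only: assumes hosts are documented on lines like "host: db01.example.com"
in a Markdown file, which is a hypothetical convention, not the commenter's.
"""
import re
import socket
import subprocess
import sys

DOC = sys.argv[1] if len(sys.argv) > 1 else "network.md"

# Hypothetical convention: one documented host per "host:" line.
HOST_LINE = re.compile(r"^host:\s*(\S+)", re.MULTILINE)

def resolves(name: str) -> bool:
    """DNS check: does the documented name still resolve?"""
    try:
        socket.gethostbyname(name)
        return True
    except socket.gaierror:
        return False

def pings(name: str) -> bool:
    """Reachability check: one ICMP echo with a short timeout (Linux ping flags)."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "2", name],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

def main() -> int:
    with open(DOC) as f:
        hosts = HOST_LINE.findall(f.read())

    failures = 0
    for host in hosts:
        ok_dns = resolves(host)
        ok_ping = ok_dns and pings(host)
        status = "PASS" if ok_ping else "FAIL"
        print(f"{status}  {host}  dns={ok_dns} ping={ok_ping}")
        if not ok_ping:
            failures += 1

    return 1 if failures else 0

if __name__ == "__main__":
    sys.exit(main())
```

Run it against the doc before the work starts (everything new fails), then again when the work is done; once every documented host passes, the documentation and the network agree, which is the test-driven loop the commenter describes.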

Post external references

  1. http://news.ycombinator.com/item?id=3390719