Business productivity has been undermined by the hubris and power-grabbing of elite computer programmers

(written by lawrence krubner, however indented passages are often quotes)

For many years, I had a refrain which I gave as advice to each client I worked with: “Your software developers are expensive, so try to shift work away from them.” Ideally, software developers should only do work that relies on skills that no one else has. If a task can be done by a graphic designer, then it should be done by a graphic designer, because generally graphic designers are paid less than software developers (obviously not in all cases, but most of the time). I co-founded a startup in 2002, and I stayed with it till 2008, and we ran a team of 8 people using this principle: push work to the less skilled people, if they can handle it. Save the tough stuff for the computer programmer. We had great success with this style of work.

This advice is still good advice, but the areas where it can be applied have shrunk, because new technologies simply require more software developer expertise to use. An obvious example would be what happened on the Web between 2005 and 2015, where we transitioned from using HTML/CSS to instead building systems with React/Relay or Angular or Vue. In 2005 it was easy (for a software developer) to offload a lot of work to graphic designers, in a way that is no longer possible.

It’s unclear that we gained much — the old system gave us Google Maps and Gmail and a lot of great online software, none of which seems obviously worse than the kind of software being created today. And, contrariwise, the biggest missing piece in 2005, a good Open Source tool for animation to replace Flash, is still missing now. What do we do nowadays when we want to mix images, sound and video in an interactive interface? Some of the missing functionality can be done with Javascript, though we are still missing a good authoring tool to match what Flash used to have — in particular a tool that allows people with little skill to do a great deal. Using React/Relay/GraphQL requires a tremendous amount of skill.

(There also used to be the argument, put forward by folks like Phillip Greenspun, that Flash did not belong on the Web, because the Web was fundamentally about text and images and HTML. I read Greenspun’s book in 1998, and I strongly agreed with him then and for many years following, but we have to admit that debate is now long dead, along with ideas such as semantic pages. The way people use Javascript now is in the spirit that Flash was once used.)

I don’t mean to overly focus on Javascript, as there have already been many good essays raising doubts about the direction of the current Web. I want to discuss 4 examples where I see power-grabbing and hubris undermining the productivity of businesses. These 4 examples are very different, and you might feel that I’m talking about completely unrelated subjects, but I feel there is one underlying trend here, which is elite programmers wanting more power for themselves, and so taking work away from less skilled workers, and giving it to themselves. This is exactly the opposite of what’s good for business, though it is good for particular programmers.

On an individual level, I freely admit I’m guilty of this. Sometimes a co-worker asks for advice, and I give them advice, and they still can’t figure out how to deal with their problem. So I get frustrated with their lack of skill, so I take over their task and do it myself. That gets the job done in the short-term, but in the long-term that is an unhealthy habit. In the long-term, it is better if the most experienced people can teach the less experienced people, so in the future the less experienced people will be able to solve their own problems, without needing any help.

However, I’m not going to talk about individual cases of specific computer programmers giving themselves too much work. My focus here is only on cases where I see widespread, structural problems that are affecting all of business.

I’ll offer 4 examples:

1. security

2. version control

3. user interfaces

4. staffing integration

In the title of this essay I’m putting all of the blame on elite software developers, but I’m being a bit unfair. If they were behaving badly, without any support from their managers, then they would be fired. In every case I will discuss, top management is complicit. I think what often happens is the top programmers say “We could do this thing, and it would be amazing” and the managers are desperate for a silver bullet that will solve their problems, so they don’t ask any tough questions, they just go along with the programmers. In other cases, especially computer security, I suspect management is the source of the problem, and the computer programmers are merely complicit. There is plenty of blame to go around to everyone.

I worry that some people will think I am contradicting myself. In How To Destroy A Tech Startup In Three Easy Steps I complained that management didn’t listen to the tech team enough. Am I now complaining that management listens to the software developers too much? Not exactly. I’m saying that sometimes management wants something magic to happen, so that all their problems can vanish like a puff of smoke, and some software developers then say “I am a sorcerer, and I can make magic happen.” Management listens when software developers are saying exactly what management wants to hear. That is not the kind of “listening” that I want to encourage. I think businesses would be healthier, in the long-term, if we could have conversations that are a lot more honest than that.

Automation is generally good, and productivity gains, when they are real and not a short-term illusion, are wonderful and deserve to be celebrated. Each of the 4 areas is important, and anything that can bring real benefits is a win for the company. But what I see, over and over, are two anti-patterns:

a.) a bandaid that disguises the issue rather than forcing management to confront it

b.) a new procedure or technology that shifts power into the hands of elite software developers

So let’s go through the 4 areas that I mentioned above:


Security

Let’s talk about Equifax. They were hacked and data regarding 145 million people was leaked. When the CEO was hauled before Congress to explain himself, he emanated a nonchalance that offended people. John Oliver did a nice takedown of it.

After this disaster was well known to the public, Equifax hired ReliaQuest to manage their server security. I have friends who work at ReliaQuest, and I know it is a great company full of great people. If you need a company to simply watch your servers and warn you about intrusions, ReliaQuest is a great choice. All the same, one cannot outsource one of a company’s core functions, and for a firm that deals with sensitive financial data, security needs to be a core function. It is reasonable for Equifax to outsource the janitorial service, but not the management of data.

I’ve been researching Equifax. As near as I can tell, hiring ReliaQuest is the main thing they’ve done to improve security. Perhaps it is the only thing.

If a company handles people’s sensitive financial data, then I would like the CEO to be the type of person who wakes up in the morning thinking about security, goes to sleep at night thinking about security, and never has security far from their mind during the day. So to hire a security company, and then act as if security is a solved problem, is troubling. There are many other ways for a company to be hacked. Social engineering is a danger, and most company hacks are inside jobs. Hiring a firm such as ReliaQuest does not protect you from having one of your own employees steal data and sell it to the Russians. Protecting against internal attacks requires hard thinking by the top leadership of the company. The job cannot be outsourced.

But I don’t mean to only focus on Equifax. I’ve seen many small companies where computer security was considered the exclusive job of the tech team. I recall a jewelry manufacturer in Richmond, Virginia, which had about 100 people, including a tech team of 3. Top management of such a company has the option to educate everyone about the importance of security, or they can just leave the task to the tech team. The tech team is often happy to gain the power granted by being in charge of such an important function. And then they implement silly rules, like forcing all passwords to change each week — minor rituals that annoy everyone while offering little real security. Real security could only come from educating the staff about the open nature of email, the importance of using encrypted communications, and the importance of protecting the intellectual property of the firm. A company with 97 ignorant people and 3 security-minded people can never be as secure as a company with 100 security-minded people.

Version Control

I’ve been writing code professionally since 1999. The first few years, I didn’t work at a company that used version control. Colin Steele mentioned Subversion to me in 2005. I used it till 2011. Since 2011, every company I’ve worked at has used Git.

From 2005 to 2011 we used Subversion to keep track of all projects. This was at both startups that I helped run, and also while doing most client work. During these years, I only had one major client who didn’t use version control (M Shanken, which puts out several well-known magazines).

Most of the non-technical people I worked with thought that Subversion was fun. Most were working on Windows machines, and they would use TortoiseSVN as their Subversion client. TortoiseSVN has bright colors and buttons, a fun interface that gave it some of the advantages later enjoyed by apps like Slack.

While we used Subversion, everyone on the team was able to check stuff into version control. Managers, artists, data analysts, the QA team, everyone used Subversion. They treated it as an infinite undo, which thrilled them. They also found it useful for tracking down when a bug was introduced into the code. I recall when the graphic designer realized an image had been overwritten by another image with the same name, and she was able to reach into Subversion and get the old image back. She felt empowered, because she could fix problems on her own.

Subversion has some problems, but a lot can be forgiven in software that is so easy to use that everyone on staff enjoys using it. I am frustrated that so many leaders fail to see the importance of this.

Here are some minor failure modes I’ve seen with Git:

1. a branch that stays open for many months, perhaps even a year (for instance, at Maternity Neighborhood)

2. data is erased for good because someone makes a mistake while using rebase

3. a developer introduces a new bug while trying to resolve a merge conflict

4. widespread but fine-grained cherry picking leaves the team unclear about what’s been merged and what has not been merged

5. a developer makes a change in the wrong branch because they forgot what branch they were in

6. a developer is unable to recover a stash because they forgot where they were when they created the stash, or they simply forget that they have work in a stash

7. developers confused by working on a detached HEAD, after a botched attempt to revert

8. a developer feels the need to delete the repo from their hard drive and clone it again, because the whole repo got into a state that they were unable to resolve

9. the “blame” command is nearly useless — because we never know in which branch a given change was made, finding who made a mistake is very difficult

10. developers get confused about which branch will be deployed (I see this especially in shops that have lots of repos for lots of small apps, and use different Git workflow strategies for the different repos, often because different programmers or different teams control the different repos)

11. developers push their changes to “origin” but forget to push to “upstream” or vice versa.
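
Most of these failure modes are recoverable, but only by someone who already knows Git’s internals — which is exactly the expertise barrier I’m describing. Take item 2: a commit that seems “erased for good” after a bad rebase or reset can usually be rescued through the reflog. Here is a minimal sketch (the throwaway repo, file name, and identity are all invented for illustration):

```python
# Sketch: recovering work that appears lost after "git reset --hard".
# A non-technical user would never guess that the reflog exists.
import subprocess, tempfile, os

repo = tempfile.mkdtemp()

def git(*args):
    subprocess.run(["git", "-C", repo, *args], check=True,
                   capture_output=True, text=True)

git("init", "-q")
git("config", "user.email", "dev@example.com")  # hypothetical identity
git("config", "user.name", "Dev")
git("commit", "-q", "--allow-empty", "-m", "initial")

with open(os.path.join(repo, "notes.txt"), "w") as f:
    f.write("important work\n")
git("add", "notes.txt")
git("commit", "-q", "-m", "add notes")

git("reset", "-q", "--hard", "HEAD~1")          # the mistake: commit seems gone
assert not os.path.exists(os.path.join(repo, "notes.txt"))

git("reset", "-q", "--hard", "HEAD@{1}")        # the reflog still remembers it
assert open(os.path.join(repo, "notes.txt")).read() == "important work\n"
```

The point is not that the data is truly lost; it is that the recovery ritual is something only the resident Git expert knows, so every mistake routes through that one person.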

But all of that stuff is trivial compared to the major flaw:

Graphic designers, writers, HTML/CSS frontenders, managers, data analysts and QA staff can’t use Git, even though they all used Subversion.

(I am being unfair by picking on Git like this — I could write a similar list about Mercurial or Bazaar, though Mercurial is much better about keeping track of which change was made on which branch, and Bazaar was much better about tracking what items had been cherry-picked.)

There are a lot of problems with Subversion. I won’t list them all here. The ideal version control system does not exist. But Subversion gets a few big things right:

1. there is no doubt what the canonical repo is

2. there is no doubt what the canonical branch is

3. all merge conflicts can be resolved with “accept theirs” and “accept mine”

4. the “blame” command is easy to use, so it is easy to figure out who made a mistake. This can be useful for educational purposes, or if you need to justify firing someone.

5. the “revert” command is simple and straightforward and does exactly what you expect it to. When non-technical staff make a mistake, they can easily revert their own mistake.

6. Graphic designers, writers, HTML/CSS frontenders, managers, data analysts and QA staff can use it. I’ve worked with many who thought Subversion was fun.
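
For readers who never saw it, the entire day-to-day surface of Subversion, for programmer and non-programmer alike, was roughly this (an illustrative sketch, not a live transcript — the file names and revision number are invented):

```
svn update                                   # pull everyone's latest work
svn add banner.png                           # stage a new image
svn commit -m "new banner"                   # publish it: no branch, no push, no staging area
svn log banner.png                           # who changed this file, and when?
svn blame styles.css                         # line-by-line authorship on the one canonical branch
svn merge -c -1483 .                         # undo revision 1483 (revert a mistake)
svn resolve --accept theirs-full index.html  # end a conflict with "accept theirs"
```

There is no deeper layer for a non-technical user to fall into: no detached HEADs, no stashes, no second remote.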

When I list these complaints for developers, most of them respond “You are complaining about Git’s power. The stuff you list isn’t really a flaw, rather those are all examples of how amazing Git is. It is flexible enough that you can do almost anything with it.”

I agree, Git is amazing and very powerful. What I’m suggesting is that we should recognize that it has a very high cost. It might empower complex workflows for sophisticated teams of experienced computer programmers, but it exiles the rest of the staff, and this has significant productivity costs. And Git is intimidating, not just to non-technical staff, but also to inexperienced programmers. In How To Destroy A Tech Startup In Three Easy Steps I talk about Sital, and his unwillingness to commit things to Git. He was learning a great deal about many other technologies, and he didn’t have any spare energy to learn about Git. He went a month without making a commit, and then he only did so because I insisted. After I put a lot of pressure on him, he got to the point where he would make one commit a day, at night, when he was stopping for the day. He would commit to the master branch, because he was confused how to handle different branches. When there was a merge conflict, I would resolve it for him. We worked together for 6 months, and in that time he learned a great deal about a lot of important topics, but he never really learned how to use Git, because it was a low priority, for both him and our CEO.

Git is very powerful? I’m willing to go along with that line of thought so long as we all understand that using a tool that is more powerful than needed can lead to problems.

Git was created by Linus Torvalds to help him manage the development of Linux. It is designed for a situation where thousands of volunteers are committing work, which will be reviewed by Torvalds’s lieutenants, who will decide if Torvalds should see the code. Much of the code is rejected. In such a situation, it makes sense that developers should work in their own repo, rather than being given access to the repo controlled by Torvalds. Git does not enforce a canonical repo, rather, you can easily fork a repo and your new repo might become canon for whoever likes your fork more than they like the repo that you forked from.

I have never worked at a company that had the same needs as Torvalds. Never. I’ve never worked at a company that sponsored non-canon development. Every company I’ve worked at has a canonical repo for each app that is under development (or multiple apps in one canonical repo). I have worked at companies that implemented very complex Git strategies. For instance, at one company, they insisted that every developer create their own repo for each app, and do code review on their own repos, and then after code review push their branch to the canonical repo and open a pull request. While this was all possible, and we all followed the rules, there was no gain. It was a lot of rituals and complexity without any benefit. Occasionally we would each make some minor mistake, such as pushing to “origin” but forgetting to push to “upstream”, and then telling the QA team that they should test our new branch, and the QA team replying with “We can’t find your new branch, are you sure it exists?” because of course they are looking at “upstream” and we only pushed to “origin”. Minor, but annoying, and what were we gaining for this extra trouble? The funny thing is, the company still had repos that were clearly canonical. We weren’t building Open Source software. We weren’t Torvalds. We were a company that had to deploy proprietary software to servers that we controlled. We gained nothing from the distributed nature of Git.
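
The origin/upstream mistake was so common that the only fix that ever stuck was mechanical: never push to one remote by hand; always push through a tiny helper that hits both. A sketch of the idea (the remote names, branch name, and helper are all mine, set up here against two throwaway bare repos):

```python
# Sketch: a push helper that keeps "origin" and "upstream" from drifting.
import subprocess, tempfile, os

base = tempfile.mkdtemp()

def git(*args, cwd=None):
    return subprocess.run(["git", *args], check=True, cwd=cwd or base,
                          capture_output=True, text=True).stdout

# Stand-ins for the two remotes the team was juggling:
git("init", "-q", "--bare", "origin.git")
git("init", "-q", "--bare", "upstream.git")

git("init", "-q", "work")
work = os.path.join(base, "work")
git("config", "user.email", "dev@example.com", cwd=work)
git("config", "user.name", "Dev", cwd=work)
git("commit", "-q", "--allow-empty", "-m", "feature work", cwd=work)
git("branch", "-q", "-M", "feature-x", cwd=work)
git("remote", "add", "origin", os.path.join(base, "origin.git"), cwd=work)
git("remote", "add", "upstream", os.path.join(base, "upstream.git"), cwd=work)

def push_everywhere(branch):
    """One habit-proof step instead of two forgettable ones."""
    for remote in ("origin", "upstream"):
        git("push", "-q", remote, branch, cwd=work)

push_everywhere("feature-x")
# Now QA can find the branch on either remote.
```

Of course, the deeper question is why the workflow needed two remotes at all — the helper papers over a ritual that bought us nothing.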

Nearly every place I’ve worked would have been better off with Subversion, as it would have allowed smoother workflows integrating the work of the whole team, in particular the non-technical staff (graphic designers, QA team, HTML/CSS frontenders).

Over the years, I’ve learned this rule: branches are for computer programmers. Non-technical staff tend to be terrified of branches. And this tends to make Git off-limits to non-technical staff. Git is strictly for computer programmers, and thus it helps contribute to the trend that I’m warning about, of too much power accumulating in the hands of the software developers, when a healthy workflow would spread as much of the work as possible to less skilled staff.

User Interfaces

Just to offer some perspective about where I’m coming from, I’ll start by saying that I think of these as tools that made it easy for beginners to be productive:


HyperCard

Visual Basic (obviously I mean the classic versions, before .NET. The stuff in the 1990s was genius)

Adobe Flash

Adobe Dreamweaver

(Of these links, I especially recommend the one about Hypercard.)

I spent 1995 learning HTML and putting together simple websites. I was thinking this is something I’d like to do professionally. In 1996 my father and I went to a demo in New York City, where Adobe was introducing PageMill, their Web page creation software. My father was impressed. My dad wasn’t in the tech industry, but he was pretty smart about technology trends. He said, “Anyone who knows HTML just lost their job. This will replace the need to know any of the underlying technologies.” But that turned out to be wrong. Even now, in 2017, most companies building Web software still rely on individuals to hand-write their frontend code. Indeed, from the point of view of business productivity, everything has been going in the wrong direction.

This category of software, creating Web pages, was eventually taken over by Dreamweaver. The software came from Macromedia, which was later bought by Adobe. In many ways, Dreamweaver is a great piece of software. It makes it easy for beginners to get started, and it lets them do some sophisticated things. But it ran into some limits. Many of the limits have to do with HTML, and the fact that HTML is extremely fragile with regard to how elements are placed on a page. An example:

<div id="member_benefits">
<ul class="benefit_types">
<li><a href="/basic">Basic</a></li>
<li><a href="/professional">Professional</a></li>
<li><a href="/enterprise">Enterprise</a></li>
</ul>
</div>

I have not used Dreamweaver in many years, but I can speak of some of the problems that used to occur. Suppose you want to rearrange the list, with Enterprise on top. In Dreamweaver, you could click and drag. Sometimes this works fine. Other times, the LI element ends up outside of the UL. Other times Dreamweaver creates a new LI for the A tag and its text, while leaving behind an empty LI tag. So you would end up with 4 LI elements in a list that only has 3 options.

The problem isn’t with Dreamweaver, the problem is with HTML. It was originally meant to be a markup language for describing the structure of a document, but it later became the language people used to build graphical interfaces for most apps going over TCP/IP. Markup languages are a terrible choice for graphical interfaces. As much as possible, when creating a graphical interface, you want to use a declarative system. Ideally, a declarative configuration file can specify your entire graphical interface. The entire tech industry is largely in agreement on this. Consider that QuarkXPress and Adobe InDesign do not use markup languages, internally, to specify the graphical layout of the documents that they create. Also consider that before Oracle bought Sun and killed JavaFX, JavaFX was going in a very exciting direction, with a highly declarative system for specifying graphical interfaces. Also consider that every web framework ever invented has systems for specifying a graphical interface in a declarative format. If you have used Ruby On Rails or Symfony (PHP) or Grails (Groovy) or nearly any other Web framework, you are aware that you can build a lot of the graphical interface by specifying configuration data in a YAML file. But you are probably also aware that all of these systems fail at some point, and they fail because of the limits of HTML.

If we had browsers that read configuration files and rendered them as graphical interfaces, without having to translate them into HTML, then the above HTML sample could be rewritten as something like (I’m going to use EDN notation, but you could use anything):

{ :id :member-benefits }

{ :id :basic, :target "/basic", :content "Basic", :mime-type "text/plain" }

{ :id :professional, :target "/professional", :content "Professional", :mime-type "text/plain" }

{ :id :enterprise, :target "/enterprise", :content "Enterprise", :mime-type "text/plain" }

{ :type :class
  :behaviors [ :link ]
  :parent :member-benefits
  :members [ :basic, :professional, :enterprise ] }

It is important to note that I could rearrange these items without affecting the output. A config like this could be made to work regardless of the order in which it is listed, with a freedom we could never have with HTML, or any other markup language. This could work the same as any other ordering of the elements:

{ :type :class
  :behaviors [ :link ]
  :parent :member-benefits
  :members [ :basic, :professional, :enterprise ] }

{ :id :professional, :target "/professional", :content "Professional", :mime-type "text/plain" }

{ :id :member-benefits }

{ :id :basic, :target "/basic", :content "Basic", :mime-type "text/plain" }

{ :id :enterprise, :target "/enterprise", :content "Enterprise", :mime-type "text/plain" }

(If you don’t like my notation here, you can imagine something similar using any of the interface building tools provided by systems such as Ruby On Rails or Symfony, but these systems could go further if they didn’t need to be translated into HTML.)
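
To make the order-independence claim concrete, here is a toy renderer in Python. It is a sketch of the idea, not a real browser, and every name in it (the render function, the dict keys) is mine. The on-screen order comes purely from the parent’s declared members list, never from source order:

```python
# Sketch: a declarative UI where source order does not matter.
# Elements are flat declarations; the group's "members" vector
# alone determines the rendered order.

def render(decls):
    """Return the rendered labels, driven only by declared relationships."""
    by_id = {d["id"]: d for d in decls if "members" not in d}
    group = next(d for d in decls if "members" in d)
    return [by_id[m]["content"] for m in group["members"]]

decls_a = [
    {"id": "member-benefits"},
    {"id": "basic", "target": "/basic", "content": "Basic"},
    {"id": "professional", "target": "/professional", "content": "Professional"},
    {"id": "enterprise", "target": "/enterprise", "content": "Enterprise"},
    {"type": "class", "behaviors": ["link"], "parent": "member-benefits",
     "members": ["basic", "professional", "enterprise"]},
]
decls_b = list(reversed(decls_a))  # same declarations, scrambled source order

assert render(decls_a) == render(decls_b) == ["Basic", "Professional", "Enterprise"]
```

A WYSIWYG tool targeting a format like this could move declarations around freely, which is exactly the freedom Dreamweaver never had with HTML.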

You cannot randomly mix up HTML. The following would produce different results, compared to our first HTML sample:

<li><a href="/basic">Basic</a></li>

<ul class="benefit_types">

<div id="member_benefits">

<li><a href="/professional">Professional</a></li>
<li><a href="/enterprise">Enterprise</a></li>

It’s the sensitivity to placement that makes HTML so fragile, and the fragility is why tools such as Dreamweaver have had trouble fully automating the creation of graphical interfaces for TCP/IP. If you doubt this, compare Dreamweaver to InDesign. Why is it that Dreamweaver is never considered good enough to handle complicated designs, whereas InDesign is certainly considered competent to handle anything a graphic designer can imagine? It’s because InDesign can internally use the best system for creating graphical designs, whereas Dreamweaver is forced to use a crippled system that was designed to facilitate the structuring of data rather than the creation of graphical interfaces. If your response to this is “The real problem is that Dreamweaver is creating interfaces that are interactive, whereas InDesign creates static designs” then pretend I rewrote this whole paragraph, with the comparison being between Dreamweaver and Flash. We still arrive at the same conclusion.

Consider that Tim Berners-Lee became frustrated with HTML and moved on to working on RDF, which he regards as his true masterpiece. RDF is what you should use when you want to structure data. HTML is a crippled, half-aborted attempt to both structure data and provide a graphical interface to it. It clearly failed. Its inventor moved on to a system that is much better suited to the structuring and manipulation of data. But the tech world has failed to move on from HTML. Instead we have endless workarounds to try to patch its failings.

HTML was created in 1989 and in 2017 it is obsolete. The tech industry should clearly get rid of it. Browsers that consume declarative configuration files would make it easier for companies such as Adobe to create software such as InDesign that could 100% handle the creation of graphical interfaces. And yet instead, the main trend of the last few years has been to use Javascript to build graphical interfaces, while still using HTML as the base graphical language. This has led to an explosion in complexity. Ten years ago frontenders were typically less experienced, and less well paid, than the backenders who did serious programming work. The situation is now completely reversed: producing graphical interfaces that work over TCP/IP is one of the most complex and demanding forms of computer programming.

While there are some frontenders who enjoy the pay and prestige that frontend work now has, the tech industry should view this situation as a failure. I’ve friends who joke that with React/Relay/GraphQL the Web finally has the functionality of VisualBasic 1.0. It’s funny, but it is also a tragic waste of time and resources, and it is a tragic waste of people’s skill and intellect. In a sane world, graphical interfaces are built by graphic designers who are generally paid less than experienced software developers. The highly skilled, highly paid software developers should be reserved for tasks that are fundamentally difficult, rather than for those tasks that are difficult only because of accidents of history and the unwillingness of the tech industry to modernize.

I’ll repeat my overall theme: as much work as possible should be given to those employees who are relatively low in skill. If a graphic designer can create a great graphical interface, they should be empowered to do so. But for now, the trend in the tech industry is going in the other direction. The ability to accomplish various tasks is increasingly concentrating in the hands of the most highly trained software developers. This is not a healthy trend. While it boosts the pay and prestige of elite programmers, it undermines business productivity, as this situation allows a small pool of employees to act as points of failure. And yes, there will always be some elite employees who can act as points of failure, but an intelligent corporate strategy would work to minimize this.

Why is this happening? I suspect part of the reason is class warfare, particularly the tension between good computer programmers and the companies that employ them (I’m talking about the best 10% of computer programmers here, but I’m excluding the top 0.1% who work for Apple or Google or Facebook). Hypothetically, these top programmers could advance ideas that would make some of their work so easy that less skilled people could do it, but what appeals to them more is automating some of their work, in a manner that keeps the work under their control. The interests of these very good programmers are not the same as the interests of the firms that employ them. But it is the rare manager who has the scope of responsibilities and depth of technical know-how that they could suggest a completely different strategy.

But if good programmers want to adopt technologies that increase their own power and prestige, it is worth noting that the very biggest tech companies (Apple, Google, Facebook) have monopoly interests that overlap with the interests of the 0.1% programming elite. In theory, Apple/Facebook/Google could promote a new declarative language that makes the creation of graphical interfaces over TCP/IP extremely easy. But if their elite programmers say “We’ve come up with this new technology and there’s only 100 people in the world who really understand it” then those top companies have an interest in allowing those technologies to be widely adopted, as those top companies will always have an advantage in those technologies. They can push complex systems that give them an edge, because they have the elite programmers who best understand the complexity of the new systems.

Still, these class tensions can only explain part of the problem. I was astonished that Google (Andy Rubin? Rich Miner?) continued to use markup languages for the interfaces for Android apps. Why would they do that? I suspect the sheer momentum of dead ideas. In every field, the mainstream of thought is slow to adapt to what most awake people are aware of. Thomas Kuhn writes about this in The Structure Of Scientific Revolutions. When I read that book, I was surprised to read that Darwin’s generation of biologists never accepted his theories. It wasn’t until the early 1900s that a younger generation of biologists finally established Darwinism as the dominant paradigm of biology. There is the old joke “Science advances one funeral at a time.” I’m sad to say, the tech industry is equally resistant to modernization.

If we can liberate our minds, we will build better systems. Indeed, we must free ourselves first, before we can help others. Some of this is purely psychological. Only when our hearts are full of a sincere love of ethical craftsmanship do we gain the wisdom to resist the temptations of pridefulness, arrogance, hubris and over-reach. When we are mentally ready to set aside our old bigotries, then we will make progress. Some of the motivation of the elite programmers probably derives from the simple fact that highly skilled people tend to think the world would be a better place if they were in control of everything. This is at least partly hubris, but I certainly understand the feeling. I’ve felt that way many times too. Countless times an inexperienced co-worker has asked for my help, and after a few minutes of trying to advise them, I become frustrated with their lack of skill, so I take over the task myself. The impulse is understandable, though it is ultimately bad for corporate productivity. Highly skilled individuals need to delegate as much as possible, so they can focus on those tasks that no one else can do.

Staffing integration

Back in the 1970s, when my mother was in graduate school, she studied computer programming so she could build simulations of the urban transportation issues that she was researching. Her professor was a woman. At the time, that wasn’t especially surprising. If you’ve seen movies like Hidden Figures, you are probably aware that the computer industry was initially welcoming to women (at least, relative to other industries, and probably more so for whites — while the movie focuses on the difficulties the women face, at the end of the movie, the women have all made giant strides into the new profession).

Over the last 30 years, women have been slowly, but relentlessly, pushed out of the computer industry. Young women have responded rationally by avoiding a profession where they know they are not wanted. You can read a good review of the percentage of women studying the subject in “Women computer science grads: The bump before the decline.”

The article says:

1986 was a good year for women in IT. In May of that year, 15,129 female students graduated with a bachelor’s degree in computer science in the U.S., according to the U.S. Department of Education. Unfortunately, that was the high water mark — and the beginning of a precipitous decline….

So why did the relative number of women choosing computer science as a baccalaureate major rise so sharply between 1971 and 1986, only to stall and decline so steadily and steeply over the next 25 years? What accounts for the bump? With the percentage of women graduates in computer science at a 39-year low, it’s a question that still lacks a definitive answer.

In the 1960s and 1970s the computer industry was more open to women than the legal profession or the medical profession. But then in the 1990s something changed. Women continued to flow into the legal and medical professions, but the tech industry became the only major profession where women’s participation was in retreat. By 2017, 50% of all new doctors in the USA were female, but among computer programmers, women had been reduced to 26%:

In 2013, only 26% of computing professionals were female — down considerably from 35% in 1990 and virtually the same as in 1960.

Hopefully, I don’t have to explain how it undermines business productivity when good people get pushed out.

It is important to realize the tech industry is somewhat unique. Its growing resistance to women goes against the trends in most other industries:

Using the National Science Foundation’s SESTAT data, we examine the gender wage gap by race among those working in computer science, life sciences, physical sciences, and engineering. We find that in fields with a greater representation of women (the life and physical sciences), the gender wage gap can largely be explained by differences in observed characteristics between men and women working in those fields. In the fields with the lowest concentration of women (computer science and engineering), gender wage gaps persist even after controlling for observed characteristics. In assessing how this gap changes over time, we find evidence of a narrowing for more recent cohorts of college graduates in the life sciences and engineering. The computer sciences and physical sciences, however, show no clear pattern in the gap across cohorts of graduates.

This subject gets a decent amount of discussion in tech magazines and tech forums. If you run a Google search, it is easy to find an abundance of personal stories:

My daughter traveled with me to DrupalCon in Denver for “spring break”, attended the expo at OSCON 2012, and even attended and watched me moderate a panel at the first Women in Advanced Computing (WiAC ’12) conference at USENIX Federated Conferences Week. Thanks to my career, my daughter’s Facebook friends list includes Linux conference organizers, an ARM developer and Linux kernel contributor, open source advocates, and other tech journalists. My daughter is bright, confident, independent, tech savvy, and fearless.

When my daughter got home from the first day of the semester, I asked her about the class. “Well, I’m the only girl in class,” she said. Fortunately, that didn’t bother her, and she even liked joking around with the guys in class. My daughter said that you noticed and apologized to her because she was the only girl in class. And when the lessons started (Visual Basic? Seriously??), my daughter flew through the assignments. After she finished, she’d help classmates who were behind or struggling in class.

Over the next few weeks, things went downhill. While I was attending SC ’12 in Salt Lake City last November, my daughter emailed to tell me that the boys in her class were harassing her. “They told me to get in the kitchen and make them sandwiches,” she said. I was painfully reminded of the anonymous boys who left comments on a Linux Pro Magazine blog post I wrote a few years ago, saying the exact same thing.

And also:

And yet, here it is, the year 2010, and my female friends and I are still being insulted, harassed, and groped at open source conferences. Some people argue that if women really want to be involved in open source (or computing, or corporate management, etc.), they will put up with being stalked, leered at, and physically assaulted at conferences. But many of us argue instead that putting up extra barriers to the participation of women will only harm the open source community. We want more people in open source, not fewer.


My wife is an IT manager. She knows Unix, can code, debug nasty Windows problems, and gets shit done. Still, coworkers, new hires, and people who don’t know her try to address her male subordinates during escalated issues, assuming the men are the ones who know what they’re doing.

Women now make up 50% of all new doctors and more than a third of all new lawyers, but there are actually fewer women graduating with advanced degrees in computer science than there were in 1989. Women are clearly willing to work extremely long hours at work that is intellectually demanding, so as to get into high-paying professions. But they are not willing to make the effort for a profession that doesn’t want them.

Is this trend an example where the arrogance or power-grabbing of elite programmers is a problem? Only the passage of time, and some in-depth studies, can answer that question, but we can say that there are some programmers who certainly use their influence for evil, and it is up to the rest of us to try to balance out the pressure they exert.

No doubt the retreat of women from tech is a complex and multivariate trend, whose driving forces cannot be fully understood at the moment. So what can we do right now? We can call out those specific instances where computer programmers have behaved badly. James Damore and Eron Gjoni are exemplars of the worst kind of behavior in the profession, so I’ll talk about them for a moment.

Programmers such as James Damore can do a lot of damage. Here is a guy who could have used his position for good, but he instead decided that his time and energy were best spent fighting to make the tech industry even more distorted than it already is. As I mentioned in If a programmer confuses the average for the 1% they deserve to be fired, even with exaggerated assumptions about the inherent weaknesses of women, we still end up with a Gaussian distribution, whose tail alone is large enough to fill every programming job in the world:

Suppose there was overwhelming evidence that 95% of women were terrible at technology and 5% of women were awesome at technology. There are roughly 7 billion people on the planet, roughly 3.5 billion women, roughly 1.5 billion women who work outside the house for a wage. In this scenario, where only 5% of women love technology, there are 75 million working women who are awesome at technology. According to the Bureau of Labor Statistics, the USA had 1,256,200 software developers in 2016. The BLS also tracks some other minor categories, such as Web Developer, which have about 150,000 jobs. Lump all the sub-categories together, and let’s say there are 2 million such jobs in the USA. Let’s be wildly generous and double the number for the EU, and triple it for Asia. That gives 12 million software developer jobs in all of the advanced and developing economies. So even with exaggerated assumptions about women’s inherent weakness in technology, we still end up with a scenario where every single programming job in the world can be filled by a woman who will be awesome at the job. There is no need for men, at all, in the tech industry.
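The back-of-envelope arithmetic in that passage is easy to check. A quick sketch, using the same rough population and job estimates from the quote (they are deliberate approximations, not precise statistics):

```python
# Check the numbers from the thought experiment above.
# All figures are the rough estimates from the passage.

working_women = 1_500_000_000   # women worldwide working outside the home
awesome_share = 0.05            # the deliberately pessimistic assumption

women_awesome_at_tech = int(working_women * awesome_share)

us_dev_jobs = 2_000_000             # BLS sub-categories lumped together, rounded up
world_dev_jobs = us_dev_jobs * 6    # US, plus double for the EU, plus triple for Asia

print(f"{women_awesome_at_tech:,} women")  # 75,000,000 women
print(f"{world_dev_jobs:,} jobs")          # 12,000,000 jobs
print(women_awesome_at_tech >= world_dev_jobs)  # True: more than enough
```

Even under the exaggerated 5% assumption, the supply of capable women outnumbers the world’s programming jobs by more than six to one.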

Obviously I don’t want to see a world where men are chased out of the technology industry, but we are currently facing the opposite situation, so I’ve adopted strong rhetoric to try to provide some balance.

One of my pet peeves is people who indulge in lazy just-so “evolutionary psychology” arguments yet still believe themselves to be hyper-rational. James Damore seems to fall into that particular vice. In his essay, he constantly references studies that found some interesting preference among women that differed from men’s, then he treats the mode of that distribution as if it applied to all women, as if Google routinely hired the mode of any distribution. Does anyone think that Google hires the mode of the nation’s IQ distribution? Damore deserved to be fired for bad math.

And of course, Eron Gjoni is not an elite programmer, but he also is not a credit to the profession. While his crusade against Zoë Quinn might not be the norm, the fact that it started a movement suggests there is some pent-up rage looking for an excuse to express itself against women.

There is also a strange dishonesty in the rhetoric that surrounds this issue. On a variety of forums, it is common to see people spew viciously misogynist language, and then deny that their words are viciously misogynist, and when others call out their behavior, the initial commenter will complain they are being censored. This comment sums up some of the rhetorical confusion:

I don’t think even Orwell predicted the Newspeak-y confusion over censorship that has become so ubiquitous — “my freedom of speech is infringed upon unless people with views I dislike are made to remain silent; it is censorship when others express certain views”.

What is the difference between the 35% of 1990 and the 26% of recent years? Accepting the rough 2 million figure, you have to assume that at least 180,000 women are missing from the tech industry. But that implies that the 35% was somehow the magically perfect number. If you assume that the tech industry should achieve gender parity, as the medical profession has, then there are almost 500,000 women missing from these technical jobs. Therefore this issue is much more urgent than the other three issues that I mentioned above. The productivity hit from missing out on 180,000 (or 500,000?) talented people is much more significant than the damage I see from having the wrong philosophy regarding security, or version control, or user interfaces.
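Those missing-women figures follow directly from the percentages. A minimal sketch, assuming the rough 2 million job figure used above:

```python
# How many women are "missing" from roughly 2 million US software jobs,
# under two different baselines for what their share "should" be.

total_jobs = 2_000_000

share_now = 0.26     # women's share of computing professionals (2013 figure)
share_1990 = 0.35    # women's share in 1990
share_parity = 0.50  # gender parity, as in medicine

missing_vs_1990 = round((share_1990 - share_now) * total_jobs)
missing_vs_parity = round((share_parity - share_now) * total_jobs)

print(f"{missing_vs_1990:,}")    # 180,000
print(f"{missing_vs_parity:,}")  # 480,000 -- "almost 500,000"
```

The point of the sketch is that the size of the gap depends heavily on which baseline you pick; the 1990 share gives the conservative lower bound.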

Some might argue that we should refrain from action on this issue until more studies are done. But the scale of the distortion to the labor market, and the implied loss of productivity, is large enough that we should address this issue with urgency. We might not understand every nuance of the situation, but we can certainly take action against the worst abuses. We can respond to the situations that have been described by countless individuals, of which I offered an infinitesimal sample above. We have a moral obligation to push back against what abuses we see around us.


Again, I have concerns about a general trend, and I offered four examples:

1. security

2. version control

3. user interfaces

4. staffing integration

Both top management and top computer programmers need to re-think their positions. It is understandable that computer programmers might enjoy the increase in power and prestige that comes from a change of process or technology that makes them even more necessary than before, but it is bad for business productivity. As much as possible, elite staff need to delegate, and that includes elite computer programmers — as much as possible, we want them to delegate work to others, and we want the leadership to support them in this. And both the leadership and the programmers need to think carefully about the real cost of adopting a technology or process that makes it impossible for less skilled staff to do work.
