Relying on “standard” libraries which may or may not exist on a server

(written by Lawrence Krubner; however, indented passages are often quotes). You can follow me on Twitter.

A strange post from ItsMe:

Programs (not only browsers) use shared libraries (e.g. glibc, provided by the OS). So the dependent libraries aren’t used by the browser alone.

Most Unix-based OSes have Python installed as standard (next to Perl).

(My RHEL, for example, depends heavily on Python, because most of the standard programs in use are written in Python.)

So why should almost 8 MB for a program that reads text be nothing?

Particularly when you can use some standard libraries already installed on your computer?

In my early days in IT, my instructors, when they looked at the code and tools I had written, told me all the time:

Nice code – but that’s already part of the standard library.

From this comes my point of view: why reinvent the wheel over and over again?


RESULT="$(wget -qO- https://news.ycombinator.com/)" && echo "$RESULT" | hxnormalize -l 300 -x | hxselect -s '\n' -c "td.title" | hxremove span | sed -e 's/<[^>]*>//g'

This one-liner does the basic part of the 7.8 MB Go program: it fetches the headlines from Hacker News and prints them to your console.

Change the sed command a bit to leave the URIs in the output. A few more lines of bash and you can reach almost the same feature set as the Go program.
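As a sketch of that change (the function name is hypothetical, and the regex assumes Hacker News’s `<a href="...">Title</a>` markup inside `td.title`), the tag-stripping sed at the end of the pipeline could be swapped for a capture that keeps each link’s URI:

```shell
#!/bin/sh
# Hypothetical sketch: instead of stripping all tags, capture the
# href and the link text with sed back-references, printing
# "Title URL" on each line. Reads HTML on stdin, so it can sit at
# the end of the wget/hxnormalize/hxselect pipeline above.
extract_titles_with_urls() {
  sed -n 's/.*<a href="\([^"]*\)"[^>]*>\([^<]*\)<\/a>.*/\2 \1/p'
}

# Example pipeline (network fetch, assuming the html-xml-utils tools):
#   wget -qO- https://news.ycombinator.com/ \
#     | hxnormalize -l 300 -x \
#     | hxselect -s '\n' -c "td.title" \
#     | extract_titles_with_urls
```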

I am surprised that a programmer would want to depend on standard libraries which may not be there when a programmer needs them. The idea that you should rely on standard libraries made sense when projects tended to live on one server, but nowadays it is common to deploy to dozens (or hundreds) of servers. Relying on libraries on the servers has meant that we now need to ensure that certain libraries are on every server. This has led to new kinds of complexity. Consider the explosion of tools that manage the servers for us: Supervisor, Chef, Puppet, Ansible, etc. Some languages (Ruby in particular) considered it a strength to rely on Unix (consider the post “I love Unicorn because it is Unix,” which is a pure expression of this idea). But relying on Unix has forced us to also rely on the tools that enforce standardization of the servers for us. Ruby was released to the world in 1995, when most server software still ran on just one machine. What made sense for that world does not make sense in a world where a hundred servers need to be spun up.

On Hacker News many people pointed out the benefits of having all code bundled together.

I agree with this:

“but dependency-free single-binary deployments are very, very convenient”

On a semi-related topic, my first non-trivial exposure to programming on the JVM was through Clojure, where it is common to use the Leiningen build/dependency manager, which allows the creation of an uberjar — offering exactly what you mention, “dependency-free single-binary deployments”.
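For concreteness, the Leiningen workflow amounts to the following (the project name and version here are hypothetical placeholders):

```shell
# Build a standalone jar containing the app plus all its dependencies.
lein uberjar

# Deploy and run the single artifact anywhere a JVM is present:
java -jar target/myapp-0.1.0-standalone.jar
```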

Then I started doing some actual Java development, and I was surprised that uberjars are somewhat uncommon among Java projects. There is a plugin for Maven that allows the creation of uberjars, but many projects use complicated application servers and the co-location of needed jars — a system that seems complicated and fragile. I ended up using Buildr as my build system, but it doesn’t seem to offer an uberjar option, at least nothing as convenient as what is offered by Leiningen. Given the convenience of uberjars, I would think they would be more common in Java projects. Apparently they are avoided because so many Java projects are huge (with many thousands of classes), and so a common uberjar pattern (exploding all dependencies to the same level) runs into the problem of name collisions. Which brings up another issue relevant to both Go and Clojure: the small size of many apps, and the ease of adopting a micro-services architecture, greatly increase the ease of deployment as well.
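The Maven plugin in question is commonly the Maven Shade plugin; a minimal sketch of the relevant `pom.xml` fragment looks like this (version and further configuration omitted — whatever fits your project):

```xml
<!-- Bind the shade goal to the package phase, so that `mvn package`
     emits an uberjar alongside the normal artifact. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

It is this “explode everything to the same level” behavior of shading that produces the name collisions mentioned above in very large projects.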
