The good and the bad of data-types

(written by lawrence krubner, however indented passages are often quotes). You can contact lawrence at: lawrence@krubner.com, or follow me on Twitter.

It is refreshingly honest to hear someone admit that types are important but not a cure-all; this is my view as well:


Not distinguishing abstraction from checking

This is my number-one beef. Almost all languages offer data abstraction. Some languages proceed to build a static checking system on top of this. But please, please, don’t confuse the two.

We see this equivocation used time and time again to make an entirely specious justification of one language or other. We see it used to talk up the benefits of type systems by hijacking the benefits of data abstraction, as if they were the same thing.

The discussion of “stringly-typed programming” (sic) at Strange Loop, both in the Snively/Laucher talk and in Chris Morgan’s Rust-flavoured talk, was doing exactly this. Yes, it’s true that HTTP headers (to borrow Morgan’s example) can and should (usually) be abstracted beyond plain strings. But failing to do so is not an omission of compile-time type checking. It’s an omission of data abstraction. Time and time again, we see people advancing the bogus argument that if you make the latter omission, you must have made the former. This is simply wrong. Type checkers are one way to gain assurance about software in advance of running it. Wonderful as they can be, they’re not the only one. Please don’t confuse the means with the ends.
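To make the distinction concrete, here is a rough sketch in Python (a dynamically checked language) of what Morgan's header example might look like with data abstraction but no static type checking. The HttpHeaders class and its methods are hypothetical, invented here purely for illustration:

```python
# Hypothetical sketch: HTTP headers as a data abstraction in a dynamically
# checked language. No compile-time type checker is involved; the assurance
# comes from the abstraction validating its inputs at its boundary.

class HttpHeaders:
    def __init__(self):
        self._fields = {}

    def set(self, name, value):
        # Reject obviously malformed field names instead of passing raw strings around.
        if not name or any(c in name for c in " :\r\n"):
            raise ValueError("invalid header field name: %r" % name)
        # Header field names are case-insensitive, so normalise once here.
        self._fields[name.lower()] = str(value)

    def get(self, name, default=None):
        return self._fields.get(name.lower(), default)


headers = HttpHeaders()
headers.set("Content-Type", "text/html")
assert headers.get("content-type") == "text/html"
```

The point of the sketch is that the assurance about well-formed headers comes from the abstraction boundary, not from any static checker.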

Pretending that syntactic overhead is the issue

The Snively/Laucher talk included a comment that people like dynamic languages because their syntax is terse. This might be true. But it made me uncomfortable, because it seems to be insinuating a different statement: that people dislike type systems only because of the syntactic overhead of annotations. This is false! Type systems don’t just make you write annotations; they force you to structure your code around type-checkability. This is inevitable, since type checking is by definition a specific kind of syntactic reasoning.

To suggest that any distaste for types comes from annotation overhead is a way of dismissing it as a superficial issue. But it’s not superficial. It’s about the deep consequences type checking has on how code must be structured. It’s also (perhaps ironically) about polymorphism. In a typed language, only polymorphism which can be proved correct is admissible. In an untyped language, arbitrarily complex polymorphism is expressible. Rich Hickey’s transducers talk gave a nice example of how patterns of polymorphism which humans find comprehensible can easily become extremely hard to capture precisely in a logic. Usually they are captured only overapproximately, which, in turn, yields an inexpressive proof system. The end result is that some manifestly correct code does not type-check.
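As a rough illustration of that last point, here is a small Python sketch (not from the talk; the function is invented here) of value-dependent polymorphism that is easy to write and manifestly correct, but whose precise type is awkward to state in most static type systems without over-approximating it:

```python
# Hypothetical example: the return type depends on a property of the value
# (the length of the input), so a typical checker can only over-approximate
# it as "int or str", losing the precise correctness argument a human sees.

def summarise(items):
    if len(items) % 2 == 0:
        return len(items)                      # int for even-length input
    return ", ".join(str(x) for x in items)    # str for odd-length input


assert summarise([1, 2]) == 2
assert summarise([1, 2, 3]) == "1, 2, 3"
```

A checker that only sees the over-approximate type "int or str" must then reject some callers that are, in fact, handling the result correctly.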

Post external references

  1. http://www.cl.cam.ac.uk/~srk31/blog/2014/10/07/