Sick politics is the driving force of useless ceremony

(written by lawrence krubner, however indented passages are often quotes). You can contact lawrence at: lawrence@krubner.com

This is part 7 of a 12-part series:

1.) Quincy’s Restaurant, a parable about concurrency

2.) Why I hate all articles about design patterns

3.) Clojure has mutable state

4.) Immutability changes everything

5.) Mutable iterators are the work of the Devil

6.) Get rid of all Dependency Injection

7.) Sick politics is the driving force of useless ceremony

8.) Functional programming is not the same as static data-type checking

Interlude

9.) Inheritance has nothing to do with Objects

10.) Is there a syntax for immutability?

11.) Immutability enables concurrency

12.) Quincy’s Restaurant, the game


Good code communicates the essence of its task

Good code does not hide its task behind a slab of boilerplate code whose only reason for existing is to satisfy some complex syntax requirements of the language. Stuart Halloway said this beautifully:

Good code is the opposite of legacy code: it captures and communicates essence, while omitting ceremony (irrelevant detail). Capturing and communicating essence is hard; most of the code I have ever read fails to do this at even a basic level. But some code does a pretty good job with essence. Surprisingly, this decent code still is not very reusable. It can be reused, but only in painfully narrow contexts, and certainly not across a platform switch.

The reason for this is ceremony: code that is unrelated to the task at hand. This code is immediate deadweight, and often vastly outweighs the code that is actually getting work done. Many forms of ceremony come from unnecessary special cases or limitations at the language level, e.g.

factory patterns (Java)

dependency injection (Java)

getters and setters (Java)

annotations (Java)

verbose exception handling (Java)

special syntax for class variables (Ruby)

special syntax for instance variables (Ruby)

special syntax for the first block argument (Ruby)

High-ceremony code damns you twice: it is harder to maintain, and it needs more maintenance. When you are writing high-ceremony code, you are forced to commit to implementation approaches too early in the process, e.g. “Should I call new, go through a factory, or use dependency injection here?” Since you committed early, you are likely to be wrong and have to change your approach later. This is harder than it needs to be, since your code is bloated, and on it goes.

Let’s look at 2 examples of FizzBuzz.

Here is an example in Java:

public class FizzBuzz {
  public static void main(String[] args) {
    for (int i = 1; i <= 100; i++) {
      if (i % 3 == 0 && i % 5 == 0) {
        System.out.print("FizzBuzz");
      } else if (i % 3 == 0) {
        System.out.print("Fizz");
      } else if (i % 5 == 0) {
        System.out.print("Buzz");
      } else {
        System.out.print(i);
      }
      System.out.print(" ");
    }
    System.out.println();
  }
}

And here is an example in Clojure:

;; match comes from the core.match library
(require '[clojure.core.match :refer [match]])

(doseq [n (range 1 101)]
  (println
    (match [(mod n 3) (mod n 5)]
      [0 0] "FizzBuzz"
      [0 _] "Fizz"
      [_ 0] "Buzz"
      :else n)))

Which code is more direct to its task? Which code comes closer to communicating the essence of its task?

The great sadness of Object-Oriented Programming is the waste of brilliant minds

Incredible cleverness has been invested in trying to overcome the weaknesses of Object-Oriented Programming. For instance, Sandi Metz gave a talk at RailsConf 2015 in which she gave an example of using Composition and Dependency Injection to solve a problem. She is aware that her approach involves a lot of bloat, and she even jokes about it in one of her slides:

“More code, Same behavior” is exactly what every developer hopes to avoid. She says, “We love Dependency Injection,” apparently unconcerned by the increase in complexity. She says the end result is “pluggable behavior,” which is certainly a good thing but which we should all hope to achieve in the simplest manner possible. She makes clear that what developers should want is this:

def order(data)
  data.shuffle
end

But, in her example, what they end up with is this:

class RandomOrder
  def order(data)
    data.shuffle
  end
end

These two extra lines of code,

class RandomOrder

end

are the penalty she pays for using Object-Oriented Programming: pure ceremony. It is always better to have three lines of code instead of five for doing exactly the same thing, so we must ask: wouldn't our lives be better if we got rid of all the Object-Oriented cruft and simply had functions that could work on our data? This line of reasoning leads us to the conclusion that the Functional paradigm might be a better path forward than the Object-Oriented paradigm. In his talk, “Why Clojure is my favorite Ruby,” Steven Deobald calls this process “navigating disappointment”: many brilliant developers engage in tremendous cognitive struggles to work around the weaknesses of Object-Oriented languages.

Consider the overall lesson that Metz wants us to learn, then ask: is there a simpler way? Can we get “pluggable behavior” with less of a conceptual burden?
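In Ruby itself, a plain lambda already gives us pluggable behavior with no wrapper class at all. A minimal sketch (the names here are mine, not from Metz's talk):

```ruby
# Pluggable behavior without a wrapper class: pass a plain lambda.
# Hypothetical names, for illustration only.
random_order = ->(data) { data.shuffle }
sorted_order = ->(data) { data.sort }

def process(data, order)
  order.call(data)
end

process([3, 1, 2], sorted_order)  # => [1, 2, 3]
process([3, 1, 2], random_order)  # => some permutation of [1, 2, 3]
```

Swapping in a new ordering strategy means writing one lambda, not declaring one class.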

What does it mean to say that a language is “low ceremony”?

That is a subjective call, and even for a particular software developer, one’s opinion will change over time. Mine certainly has. I recall when I first discovered Ruby on Rails, back in 2005. It had tools that would auto-generate my model classes for me, which I thought was very cool. Better still, the model classes were simple. If I had database tables for “users”, “products”, “sales” and “purchases”, the tools would generate four files for me, which would initially look like this:

The “user.rb” file:

class User

end

The “product.rb” file:

class Product

end

The “sale.rb” file:

class Sale

end

The “purchase.rb” file:

class Purchase

end

At the time, I thought this was fantastic. I didn’t have to write getters and setters; they were implied. My limited experience with Java had convinced me that Java was verbose! In Java, I had to write a function to get and set every variable. How tedious! (Even PHP, a scripting language, forced me to write getters and setters, just like Java.) Ruby saved us from all that! Ruby was a miracle!


But the present constantly rewrites the meaning of the past. The years go by and we see things differently. I started working with Clojure, and when I came back to Ruby I was astonished to realize that it now seemed high-ceremony to me. Look at those four empty classes! That’s four files, and eight lines of code, that are mere boilerplate. They don’t help me with my task; I have to write them only so that when method_missing is called it has some context to work with. What seemed low-ceremony to me in 2005 struck me as high-ceremony by 2011.
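To see why those empty classes exist at all, here is a rough sketch, my own and not ActiveRecord's actual implementation, of how an ORM-style method_missing turns a row of data into accessors:

```ruby
# Rough sketch of ORM-style dynamic accessors via method_missing.
# Illustrative only; not ActiveRecord's real implementation.
class Record
  def initialize(row)
    @row = row
  end

  # Any unknown method name is looked up as a field of the row.
  def method_missing(name, *args)
    @row.key?(name) ? @row[name] : super
  end

  def respond_to_missing?(name, include_private = false)
    @row.key?(name) || super
  end
end

user = Record.new(name: "Ada", email: "ada@example.com")
user.name  # => "Ada"
```

The empty `class User; end` file in Rails plays the role of `Record` here: it contributes nothing itself, it merely gives the framework's method_missing machinery a place to live.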

As we've seen before, we can get all the safety we need, and much more flexibility, with much less effort, by allowing our vars to be accessible such that we can run any function on the data inside them. To the extent that we need safety, we can achieve it by ensuring that the function or functions which update our vars all enforce the right contract. We can hide vars if we need to, or make them global, and we can have one function, or a hundred, that operate on that data. So long as we have the right contract in the right place, we will have the safety we need. And so long as we don't make an ideology of "state and behavior should be bound together," we can avoid a massive amount of complexity.
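A sketch of that idea in Ruby (the names and the contract are hypothetical): keep the data open, and put the contract in the one function that updates it.

```ruby
# Plain, open data; the contract lives in the update function.
# Hypothetical example, not from any particular library.
def set_age(person, age)
  unless age.is_a?(Integer) && age >= 0
    raise ArgumentError, "age must be a non-negative integer"
  end
  person.merge(age: age)  # returns a new hash; the original is untouched
end

ada   = { name: "Ada", age: 36 }
older = set_age(ada, 37)

# Any function can still read the data directly:
ada[:name]  # => "Ada"
older[:age] # => 37
```

Nothing is hidden behind getters, yet the invariant (a valid age) is enforced at the single point where the data changes.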

Ceremony cannot replace leadership

In response to "avoid useless ceremony," some developers offer the counter-argument "We need some ceremony to keep our team organized." An example from Hacker News:

I really liked using Sinatra until I used it with a team. The lack of ceremony and structure (as you put it) killed us. The app started out as just a simple service, so why not use Sinatra right? Then things changed and we were more focused on the web side, but stuck with Sinatra because everyone had heard the FUD about rails.

Each person has a preference on where they thought something should be. Days were spent arguing over the location of mundane things. Each team member had a different understanding of REST, so without the rails routing we argued about what we thought an ideal API would look like. We argued about which view engine we should introduce. We argued about how to handle our JS/CS. We argued some more about routes and how to make them more discoverable. On and on. Not intentional, but something new would come up and we would need to figure out where to put it.

This, more than anything, hammered home for me the strength of rails. Rails is great at getting you up and running, but it's greatest strength is that it's idioms are well documented. That gives developers a strong impression of how things are going to be and where things go. This saves you so much time by not having to argue and reach consensus on every little thing.

How to keep a team organized is a completely reasonable concern. But where is the leadership of your team if "days were spent arguing over the location of mundane things"? If your team lacks leadership, you might feel the impulse to outsource some of your decision-making to an outside body, such as the team that creates Rails. If your internal politics are sick, then using a good framework allows you to borrow some healthy politics from someone else. But remember, that is a crutch. You and your project will be much healthier, and more creative, if you can work through the internal issues, or make changes in management, so that you end up with healthier leadership.

The crutch will only help you for a little while. Rails can help you get started even if you have sick leadership, but after a few months you will find yourself having debates over "how to refactor fat models in Ruby on Rails." And then the leadership issues that you previously ducked will come back to haunt you. You cannot permanently postpone the moment when you need to make hard decisions. The best option, always, is to confront your leadership issues at the start of the project, rather than eight months in.

Distrust of developers is the driving force of useless ceremony

In some sense, sick politics, at the national and even international level, has always been the driving force of useless ceremony. The direction that Java development took is perhaps the best example of this. There was a political idea that drove Object-Oriented Programming to its peak in the late 1990s: the idea of outsourcing. Outsourcing software development rested on certain assumptions about how software development should work, in particular the idea of the “genius” architect backed by an army of morons who could act as secretaries, taking dictation. Ceremony-heavy Object-Oriented Programming was the software equivalent of a trend that became common in manufacturing during the 1980s: design stays in the USA while actual production is sent to a Third World country. With UML diagrams, writing code could be reduced to mere grunt work, while the design of software was handled by “visionaries,” possessed of epic imaginations, who would specify an Object-Oriented hierarchy that could then be sent to India for a vast team to actually type out. And the teams in India (or Vietnam, or Romania, etc.) were never trusted; they were assumed to be idiots. So, for a moment, there was a strong market demand for a language that treated programmers like idiots, and the stage was set for the emergence of Java.

Facundoolano, who is a huge fan of Python, sums up the distrust that Java has for programmers:

Java design and libraries consistently make a huge effort making it difficult for a programmer to do bad things. If a feature was a potential means for a dumb programmer to make bad code, they would take away the feature altogether. And what’s worse, at times they even used it in the language implementation but prohibited the programmer to do so. Paul Graham remarkably pointed this out as a potential flaw of Java before actually trying the language:

Like the creators of sitcoms or junk food or package tours, Java’s designers were consciously designing a product for people not as smart as them.

But this approach is an illusion; no matter how hard you try, you can’t always keep bad programmers from writing bad programs. Java is a lousy replacement to the learning of algorithms and good programming practices.

The implications of this way of thinking are not limited to language features. The design decisions of the underlying language have a great effect on the programmer’s idiosyncrasy. It’s common to see Java designs and patterns that make an unnecessary effort in prohibiting potential bad uses of the resulting code, again not trusting the programmer’s judgment, wasting time and putting together very rigid structures.

The bottom line is that by making it hard for stupid programmers to do bad stuff, Java really gets in the way of smart programmers trying to make good programs.

The true path to pluggable behavior comes from avoiding needless specificity

Let's go back to the point that Sandi Metz was making. We want pluggable behavior. We want flexibility. We want code re-use. Is there an easier way to get it than the approach she suggested?

Consider Clojure's alternatives to standard Object Oriented approaches:

It ends up that classes in most OO programs fall into two distinct categories: those classes that are artifacts of the implementation/programming domain, e.g. String or collection classes, or Clojure’s reference types; and classes that represent application domain information, e.g. Employee, PurchaseOrder etc. It has always been an unfortunate characteristic of using classes for application domain information that it resulted in information being hidden behind class-specific micro-languages, e.g. even the seemingly harmless employee.getName() is a custom interface to data. Putting information in such classes is a problem, much like having every book being written in a different language would be a problem. You can no longer take a generic approach to information processing. This results in an explosion of needless specificity, and a dearth of reuse.
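A small sketch of the difference, with hypothetical records: when domain information lives in plain maps rather than behind class-specific getters like employee.getName(), one generic function serves every record type.

```ruby
# Domain information as plain data rather than class-specific getters.
# Hypothetical records, for illustration only.
employee = { name: "Grace", dept: "Engineering" }
purchase = { id: 17, total: 99.0 }

# One generic function works on any record; no Employee or
# PurchaseOrder class, and no custom micro-language per type.
def select_fields(record, *keys)
  record.select { |k, _| keys.include?(k) }
end

select_fields(employee, :name)   # => {:name=>"Grace"}
select_fields(purchase, :total)  # => {:total=>99.0}
```

Because both records share one generic "language" (the map), the same selection, merging, and printing functions apply to all of them, which is exactly the reuse that per-class getters forfeit.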

There are no perfect computer programming languages. There never will be. Every decision in programming involves trade-offs between one approach and another. But if any guide holds true in all circumstances, that last point might be the best: avoid needless specificity. Avoid all the unnecessary cruft that ties you down to specific implementations. Avoid the needless rules that bind you to specific strategies without actually helping you.

And if this much freedom causes you to wonder "How can my team stay organized?" then you should address those leadership issues and not try to use needless ceremony as a crutch for avoiding a sick internal political situation.


(Also, I offer a huge "Thank you" to Natalie Sidner for the tremendous editing she did on the rough draft of this post. To the extent that this article is readable, it is thanks to her. Any mistakes are entirely my fault, and I probably added them after she was done editing. If you need to hire a good editor, contact Natalie Sidner at "nataliesidner at gmail dot com".

Also, I thank Blanche Krubner for reviewing this work. As Mrs Krubner studied computer programming during the 1970s, I found it fascinating to get feedback from someone whose views of the discipline were shaped during a different era.)
