Get rid of all Dependency Injection

(written by lawrence krubner, however indented passages are often quotes)

This is part 6 of a 12 part series:

1.) Quincy’s Restaurant, a parable about concurrency

2.) Why I hate all articles about design patterns

3.) Clojure has mutable state

4.) Immutability changes everything

5.) Mutable iterators are the work of the Devil

6.) Get rid of all Dependency Injection

7.) Sick politics is the driving force of useless ceremony

8.) Functional programming is not the same as static data-type checking

9.) Inheritance has nothing to do with objects

10.) Is there a syntax for immutability?

11.) Immutability enables concurrency

12.) Quincy’s Restaurant, the game

When classes do everything, they will do everything badly

In his famous presentation on the simplicity of Design Patterns in languages with higher-level abstractions, Peter Norvig offers this:

We’ve already looked at some of the difficulties that arise when namespaces are more than mere organizational tools. What happens when you give a namespace state and force the namespace to take responsibility for initializing its own state (that is, an object)? Almost instantly, you realize that you have made a colossal mistake. How do you avoid this mistake? You have 2 options:

1.) Use a language in which namespaces don’t have state

2.) Use a Dependency Injection system, which shifts the responsibility of initializing the namespace to a system outside of your code.

If you pick #2, you are stepping into a world of vast complexity.

Maybe it’s time we admit that binding state and behavior was a bad idea?

It is impossible to do large-scale, real-world Object Oriented Programming without some kind of Dependency Injection system. But that has a flip side: any attack on Dependency Injection is also an attack on large-scale, real-world Object Oriented Programming.

Why is Object Oriented Programming bad? Exactly for the reason it is supposed to be good: it binds together state and behavior. The original conjecture that made Object Oriented Programming so exciting was that we could finally achieve safe mutation of state, by hiding our data and only allowing it to change through the methods we added to the objects that held the state. Over the last 30 years, this conjecture has been thoroughly tested, and by now we have enough evidence to say that Object Oriented Programming is a failed experiment. While you can obviously get Object Oriented systems to work (millions of software developers do so every day), we pay a high price in terms of complexity, and it turns out that there are simpler ways of achieving high levels of safety.

I am known for being critical of Object Oriented Programming (the Wikipedia page lists me as a critic) exactly because it burdens us with the conceptual complexity of things such as Dependency Injection. But before we talk about why Dependency Injection is terrible, let’s first go over all the reasons it is supposed to be wonderful.

Why is Dependency Injection the best thing in the whole history of the Universe?

To start with, you should read Martin Fowler’s canonical essay, written in 2004, in which he coined the phrase “Dependency Injection”.

I’m going to look at more recent work.

Michael Bevilacqua-Linn, in his wonderful book “Functional Programming Patterns in Scala and Clojure”, offers both the standard arguments for Dependency Injection, and also an explanation of why it is an unnecessary complication that we can get rid of with Functional programming. He offers this on page 128:

Objects are the primary unit of composition in the object-oriented world. Dependency Injection is about composing graphs of objects together. In its simplest form, all that’s involved in Dependency Injection is to inject an object’s dependencies through a constructor or setter.

For instance, the following class outlines a movie service that’s capable of returning a user’s favorite movies. It depends on a favorites service to pull back a list of favorite movies and a movie DAO to fetch details about individual movies:

package com.mblinn.mbfpp.oo.di;
public class MovieService {

  private MovieDao movieDao;
  private FavoritesService favoritesService;

  public MovieService(MovieDao movieDao, FavoritesService favoritesService) {
    this.movieDao = movieDao;
    this.favoritesService = favoritesService;
  }
}

Here we’re using classic, constructor-based Dependency Injection. When it’s constructed, the MovieService class needs to have its dependencies passed in. This can be done manually, but it’s generally done using a dependency-injection framework.

Dependency Injection has several benefits. It makes it easy to change the implementation for a given dependency, which is especially handy for swapping out a real dependency with a stub in a test.

With appropriate container support, dependency injection can also make it easier to declaratively specify the overall shape of a system, as each component has its dependencies injected into it in a configuration file or in a bit of configuration code.

…In a full program, we’d then use a framework to wire up MovieService’s dependencies. We have quite a few ways to do this, ranging from XML configuration files to Java configuration classes to annotations that automatically wire dependencies in.

All of these share one common trait: they need an external framework to be effective. …Clojure offers options that have no such limitation.

One thing that strikes me when programmers sing the praises of Dependency Injection is that their words often read as the opposite of what they intend: the programmers believe they are writing praise, yet their words can be read as criticism. Let us consider Chad Myers’ reflections on the advice he gleaned from the book “Design Patterns: Elements of Reusable Object-Oriented Software”.

In the book, the authors say:

[Manipulating objects solely in terms of their interface and not their implementation] so greatly reduces implementation dependencies between subsystems that it leads to the following principle of reusable object-oriented design:

Program to an interface, not an implementation.

Don’t declare variables to be instances of particular concrete classes. Instead, commit only to an interface defined by an abstract class.

This point is profound and if it isn’t already something you religiously practice, I suggest you do some more research on this topic. Coupling between types directly is the hardest, most pernicious form of coupling you can have and thus will cause considerable pain later.

Consider this code example:

  public string GetLastUsername()
  {
    return new UserReportingService().GetLastUser().Name;
  }

As you can see, our class is directly new()’ing up a UserReportingService. If UserReportingService changes, even slightly, so must our class. Changes become more difficult now and have wider-sweeping ramifications. We have now just made our design more brittle and therefore, costly to change. Our future selves will regret this decision. Put plainly, the “new” keyword (when used against non-framework/core-library types) is potentially one of the most dangerous and costly keywords in the entire language — almost as bad as “goto” (or “on error resume next” for the VB/VBScript veterans out there).

What, then, can a good developer do to avoid this? Extract an interface from UserReportingService (-> IUserReportingService) and couple to that. But we still have the problem that if my class can’t reference UserReportingService directly, where will the reference to IUserReportingService come from? Who will create it? And once it’s created, how will my object receive it? This last question is the basis for the Dependency Inversion principle. Typically, dependencies are injected through your class’ constructor or via setter methods (or properties in C#).
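For contrast, here is a hedged sketch, in Clojure, of how the same dependency can simply be passed in as a function. The names are my own illustration, not from Myers:

```clojure
;; Instead of extracting an IUserReportingService interface and injecting
;; an implementation, a functional style passes the behavior in directly.
(defn last-username [get-last-user]
  (:name (get-last-user)))

;; In production you pass the real lookup function; in a test, a stub:
(last-username (fn [] {:name "alice"})) ; => "alice"
```

The “interface” here is nothing more than the agreement that the passed-in function returns a map with a :name key.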

What is Object Oriented Programming supposed to give us in this situation? Mostly three things:

1.) contract enforcement

2.) data hiding

3.) ease of composition

But we do not need Object Oriented Programming to achieve these three things, and if we do use Object Oriented Programming then it inflicts on us a high cost in terms of ceremony, setup, and complexity. How bad is that cost? Myers continues:

It’s also the case that the act of creating (new()’ing) an object is actually a responsibility in and of itself. A responsibility that your object, which is focused on getting the username of the last user who accessed the system, should not be doing. This concept is known as the Single Responsibility Principle. It’s a subtle distinction, and we usually don’t think of the “new” keyword as a responsibility until you consider the ramifications of that simple keyword. What if UserReportingService itself has dependencies that need to be satisfied? Your class would have to satisfy them. What if there are special conditions that need to be met in order for UserReportingService to be instantiated properly (existing connection to the database/open transaction, access to the file system, etc.)? The direct use of UserReportingService could substantially impact the functioning of your class and therefore must be carefully used and designed. To restate, in order to use another class like UserReportingService, your class must be fully responsible and aware of the impacts of using that class.

Please note the irony that Myers is arguing in favor of Object Oriented Programming, though his words can easily be read as a criticism of Object Oriented Programming. He continues:

The Creational patterns are concerned with removing that responsibility and concern from your class and moving it to another class or system that is designed for and prepared to handle the complex dependencies and requirements of the classes in your system. This notion is very good and has served us well over the last 15 years. However, the Abstract Factory and Builder pattern implementations, to name two, became increasingly complicated and convoluted. Many started reaching the conclusion that, in a well-designed and interface-based object architecture, dealing with the creation and dependency chain management of all these types/classes/objects (for there will be many more in an interface-based architecture and that is OK), a tool was needed. People experimented with generating code for their factories and such, but that turned out not to be flexible enough.

In other words, the costs of this approach were so awful that even the proponents of Object Oriented Programming nowadays shrink away in horror. Object Oriented Programming was once seen as the silver bullet that was going to save the software industry. Nowadays we need a new silver bullet to save us from Object Oriented Programming, and so, we are told (in the next paragraph), the Inversion of Control Container was invented. This is the new silver bullet that will save the old silver bullet.

The next 4 paragraphs are simply amazing, and not in a good way:

To combat the increasing complexity and burden of managing factories, builders, etc, the Inversion of Control Container was invented. It is, in a sense, an intelligent and flexible amalgamation of all the creational patterns. It is an Abstract Factory, a Builder, has Factory Methods, can manage Singleton instances, and provides Prototype capabilities. It turns out that even in small systems, you need all of these patterns in some measure or another. As people turned more and more of their designs over to interface-dependencies, dependency inversion and injection, and inversion of control, they rediscovered a new power that was there all along, but not as easy to pull off: composition.

By centralizing and managing your dependency graph as a first class part of your system, you can more easily implement all the other patterns such as Chain of Responsibility, Decorator, etc. In fact, you could implement many of these patterns with little to no code. Objects that had inverted their control over their dependencies could now benefit from that dependency graph being managed and composited via an external entity: the IoC container.

As the use of IoC Containers (also just known as ‘containers’) grew wider and deeper, new patterns of use emerged and a new world of flexibility in architecture and design was opened up. Composition which, before containers, was reserved for special occasions could now be used more often and to fuller effect. Indeed, in some circumstances, the container could implement the pattern for you!

Why is this important? Because composition is important. Composition is preferable to inheritance and should be your first route of reuse, NOT inheritance. I repeat, NOT inheritance. Many, certainly in the .NET space, will go straight for inheritance. This eventually leads to a dark place of many template methods (abstract/virtual methods on the base class) and large hierarchies of base classes (only made worse in a language that allows for multiple inheritance).

Again, we are doing all this work to get three things:

1.) contract enforcement

2.) data hiding

3.) ease of composition

I am sure we can all agree that these 3 things are greatly to be desired in a software project. So then the question arises, is there an easier way to achieve these three things?

If you’ve been doing programming for long then you have probably heard the quote from Alan Perlis:

It is better to have 100 functions operate on one data structure than 10 functions on 10 data structures.

Why is that? What simplicity do we get by working directly on data structures?

Recently, Sean Corfield offered this beautiful example:

I see it as the difference between:

  {:name "Sean" :age 52}


	public class Person {
		private String name;
		private Long age;
		public Person( String name, Long age ) {
			setName( name );
			setAge( age );
		}
		public String getName() { return this.name; }
		public Long getAge() { return this.age; }
		public void setName( String name ) { this.name = name; }
		public void setAge( Long age ) { this.age = age; }
	}

Even if you made Person a value class by removing the setters, that’s a lot of code obscuring a simple data structure. And with the data structure, you can use any Clojure sequence function or hash map function, whereas with a Person type, you’re stuck with the API presented.
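Because the Clojure version is just a hash map, every generic map function applies to it directly. For instance:

```clojure
;; Plain data, so the whole standard library works on it.
(def person {:name "Sean" :age 52})

(update person :age inc)      ; => {:name "Sean" :age 53}
(select-keys person [:name])  ; => {:name "Sean"}
(keys person)                 ; => (:name :age)
```

No getters, no setters, and no API to learn beyond the functions you already use on every other map in your program.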

I’m sure we can all agree that this is simpler:

  {:name "Sean" :age 52}

And I assume it is obvious how this gives us “ease of composition”: we can chain together as many functions as we like to transform this data. Ah, but wait, how do we get data hiding and contract enforcement? Well, we can still add contracts to the functions that transform our data. That is typically enough. In the rare cases where we really need data hiding, we can resort to some kind of Pseudo Object Oriented Pattern (I posted an example on Github), where we put a var in a namespace and mark it private, and then we add public functions which are the only ways to transform that private var. That approach still spares us the initial configuration problems with “new’ing” that Myers emphasized.
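A minimal sketch of what “contracts on the functions that transform our data” means (the function name is illustrative):

```clojure
;; Plain data in, plain data out, with preconditions guarding integrity.
(defn birthday [person]
  {:pre [(map? person) (number? (:age person))]}
  (update person :age inc))

(birthday {:name "Sean" :age 52}) ; => {:name "Sean" :age 53}
```

Pass in something malformed and the precondition throws an AssertionError, so the data can never be transformed into an invalid shape by this function.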

As I say on Github, a namespace is much simpler than a class:

1.) you don’t give parameters to a namespace

2.) you don’t instantiate a namespace

3.) but you can inspect them

Hell has many tortures

Up above I said that Object Oriented Programming was a failed experiment. That is a fairly bold claim. And yet, over and over again, during the last 10 or 15 years, we have seen some of the leading members of the tech community admit that, no matter how hard they tried, they could not get Object Oriented Programming to deliver on its promises. And, what I find especially interesting, over the last 5 years we’ve seen the emergence of microservices, and the complications that they bring to our projects, especially in regards to enforcing contracts and versioning our APIs. What’s fascinating is how much the research around microservices is finally giving us some of the breakthroughs that Object Oriented Programming was supposed to give us.

Consider this conversation from 2003, between Bill Venners and Anders Hejlsberg:

Bill Venners: To what extent is “DLL Hell” a failure of interface contracts to work adequately in practice? If everyone fully understands and adheres to the contract of the functions of a particular DLL, shouldn’t updating that DLL in theory not break any code?

Anders Hejlsberg: Hell has many tortures. One aspect of DLL hell is that you don’t adhere to your promised semantic contract. You do something different than what you did before, and therefore you break code. That’s actually probably not the biggest issue that we face. The true problem with DLL hell is that we don’t allow you to have multiple different versions of a particular DLL present on the machine. Once you upgrade the DLL you upgrade everybody, and that’s a mighty big hammer.

Bill Venners: But if the contract is followed, shouldn’t the most recent version work for all users of that DLL?

Anders Hejlsberg: In theory, yes. But any change is potentially a breaking change. Even a bug fix could break code if someone has relied on the bug. By the strictest definition you realize that you can do nothing once you’ve shipped.

Wow! So Anders Hejlsberg, hardly a slouch of a programmer, was having trouble getting Object Oriented Programming to live up to its promises!

But equally interesting, from the same interview, is how much this part of the conversation anticipates the conversation that nowadays occurs around microservices:

Bruce Eckel: Have you come up with any interesting thoughts about ways to enforce semantic contracts?

Anders Hejlsberg: I think the most promising ideas are existing and new developments in pre- and post-conditions, assertions, invariants, and so forth. Microsoft research has several ongoing projects that we continually evaluate. We have looked at some pretty concrete proposals. In the end we realized that—as is the case for almost any truly important feature—you can’t just do contract enforcement from purely a programming language perspective. You have to push it into the infrastructure, the CLR, the Common Language Specification, and therefore all the other programming languages. What good is putting invariants on an interface, if it is optional for the implementer of the interface to include the code of the invariant? So it’s really more at a type system level than at a programming language level, but that doesn’t necessarily mean that we’re not going to address it. We are not doing that in the next release, but it could become a first class feature in C# down the line.

Recently I’ve been reading Sam Newman’s excellent book, “Building Microservices” and some of what he writes about semantic versioning for APIs overlaps with the subjects that Hejlsberg was talking about. This is from page 64:

Wouldn’t it be great if as a client you could look just at the version number of a service and know if you can integrate with it? Semantic versioning is a specification that allows just that. With semantic versioning, each version number is in the form MAJOR.MINOR.PATCH. When the MAJOR number increments, it means that backward incompatible changes have been made. When MINOR increments, new functionality has been added that should be backward compatible. Finally, a change to PATCH states that bug fixes have been made to existing functionality.

To see how useful semantic versioning can be, let’s look at a simple use case. Our helpdesk application is built to work against version 1.2.0 of the customer service. If a new feature is added, causing the customer service to change to 1.3.0, our helpdesk application should see no change in behavior and shouldn’t be expected to make any changes. We couldn’t guarantee that we could work against version 1.1.0 of the customer service, though, as we may rely on functionality added in the 1.2.0 release. We could also expect to have to make changes to our application if a new 2.0.0 release of the customer service comes out.

You may decide to have a semantic version for the service, or even for an individual endpoint on a service if you are coexisting them as detailed in the next section.

This versioning scheme allows us to pack a lot of information and expectations into just three fields. The full specification outlines in very simple terms the expectations clients can have of changes to these numbers, and can simplify the process of communicating about whether changes should impact consumers. Unfortunately, I haven’t seen this approach used enough in the context of distributed systems.
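The compatibility rule Newman describes can be sketched as a small predicate. This is my own illustration, assuming versions are represented as [MAJOR MINOR PATCH] vectors:

```clojure
;; Can a client built against version a talk to a service at version b?
;; Same MAJOR, and the service's MINOR must be at least the client's.
(defn compatible? [[maj-a min-a _] [maj-b min-b _]]
  (and (= maj-a maj-b) (<= min-a min-b)))

(compatible? [1 2 0] [1 3 0]) ; => true  (backward-compatible new feature)
(compatible? [1 2 0] [1 1 0]) ; => false (may rely on 1.2.0 functionality)
(compatible? [1 2 0] [2 0 0]) ; => false (breaking change)
```

This is exactly the helpdesk scenario from the quote above, encoded in a few lines.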

Coexist different endpoints

If we’ve done all we can to avoid introducing a breaking interface change, our next job is to limit the impact [of such a change]. The thing we want to avoid is forcing consumers to upgrade in lock-step with us, as we always want to maintain the ability to release microservices independently of each other. One approach I have used successfully is to coexist both the old and the new interfaces in the same running service. So if we want to release a breaking change, we deploy a new version of the service that exposes the old and the new versions of the endpoint.

This allows us to get the new microservice out as soon as possible, along with the new interface, but gives time for consumers to move over. Once all of the consumers are no longer using the old endpoint, you can remove it along with any associated code.
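A hedged sketch of what coexisting endpoint versions might look like inside one running service (the handler names and response shapes are my own illustration, not from Newman):

```clojure
;; One deployed service exposing both the old and the new endpoint version.
(defn customer-v1 [request] {:status 200 :body "old response shape"})
(defn customer-v2 [request] {:status 200 :body "new response shape"})

(defn handler [request]
  (case (:uri request)
    "/v1/customer" (customer-v1 request)
    "/v2/customer" (customer-v2 request)
    {:status 404}))
```

Old consumers keep calling /v1/customer; new consumers move to /v2/customer; when traffic to /v1 drops to zero, that branch is deleted.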

So for instance, Salesforce offers all of its APIs under a version system. Last year, when I began writing code to interact with Salesforce, I stupidly set my code to always use the latest version of the Salesforce API. At that time, the latest version was 34. And then one day I got to the office and my app no longer worked. I found out that during the night, Salesforce had rolled out version 35, which had breaking changes. So I hardcoded my app to use version 34. This put us in charge of when we would upgrade to Salesforce’s newer API versions.

Also see: Is it true that no application written on SalesForce has ever been rendered obsolete by an update (every app ever written for SalesForce is still operable)? How does SF ensure such complete backward compatibility?

How should we organize our code?

So, are Functional programmers magically freed from the need to organize their code? Of course not. But the question “How should I organize my code?” is much easier to answer than the question “How should I organize and initialize my code?” and this latter question is the one you are forced to answer when your state is forced into objects. I think this is what Joe Armstrong (creator of Erlang) meant when he said:

I think the lack of reusability comes in object-oriented languages, not functional languages. Because the problem with object-oriented languages is they’ve got all this implicit environment that they carry around with them. You wanted a banana but what you got was a gorilla holding the banana and the entire jungle. If you have referentially transparent code, if you have pure functions — all the data comes in its input arguments and everything goes out and leave no state behind — it’s incredibly reusable.

A programmer in a Functional language still has to answer the question “Which namespace or module should I put this function in?”. But that is an easy question, since the function is not tied to any particular variables. We are not mixing behavior and state. Object Oriented Programmers often struggle with the whole organizational scenario, which tends to manifest like this:

“I have a Movie class and a User class and a Favorites class. I am creating a function to register a Movie as a Favorite of a User. Should the function go in the Movie class, so it can have access to the Movie variables? Or should it go in the User class, so it can have access to the User variables? Or perhaps it should go in the Favorites class, which could have a variable for matching a User to a Movie? But how can Favorite get access to the data that it needs? Perhaps each instance of a User should hold a reference to a Favorite Singleton? Or should the Movie class hold the reference to the Favorite singleton? How shall the Movie and User talk to each other? Does one belong to the other?”

It is a difficult series of questions exactly because of the data hiding that the objects engage in. And the objects engage in data hiding because that is supposed to keep our data safe. If the objects didn’t hold state, most of these questions vanish.

In Clojure, you might have three namespaces, Movie, User and Favorite. These would simply be convenient places to put related functions. You might store the movie data, user data and favorite data in a single var, an atom, which perhaps is only updated by one or two functions. Those few functions can enforce whatever contract you want to enforce, and thus offer data integrity. That atom could be shared among the Movie, User and Favorite namespaces.

The big surprise of Functional programming is that this works, and it can be safe. If you have absorbed the last 30 years of Object Oriented propaganda, then you know ONLY OBJECTS CAN KEEP US SAFE! But, amazingly, this propaganda turns out to be wrong. You can have a global var, and if there are only one or two functions that update it, and you enforce contracts on those functions, then you get the safety that you need.
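At the REPL, a minimal sketch of this might look like the following (the names here are illustrative):

```clojure
;; One shared var, updated only through a function that enforces a contract.
(def favorites (atom {}))

(defn add-favorite [user-id movie-id]
  {:pre [(number? user-id) (number? movie-id)]}
  (swap! favorites update user-id (fnil conj []) movie-id))

(add-favorite 1 42)
@favorites ; => {1 [42]}
```

Call `(add-favorite "one" 42)` and the precondition throws, so nothing invalid ever reaches the shared state.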

Or perhaps you need a little bit more safety? Then you can adopt what I call the Pseudo Object Oriented style (a basic example), and make the var itself private, so that it can only be updated by the public functions in that namespace:

(ns entertainment.tracking)

(def ^:private media (atom {}))
(def ^:private favorites (atom {}))

(defn add-to-media [group id value]
  {:pre [(keyword? group)
         (or (= group :movie) (= group :user))
         (number? id)
         (string? value)]}
  (swap! media
         (fn [previous-media]
           (assoc-in previous-media [group id] value))))

(defn add-to-favorites [user-id movie-id]
  {:pre [(number? user-id)
         (number? movie-id)]}
  (swap! favorites
         (fn [previous-favorites]
           (assoc previous-favorites user-id
                  (conj (get previous-favorites user-id []) movie-id)))))

So in this case, the two vars are private, and can only be updated by the two functions that are public. And we can put any kind of contract on these functions, and thus ensure our data integrity. In this case, I’m using pre-assertions, which take us a very small step down the road to Gradual Typing, and which give us a runtime error if I violate the contract:

user> (add-to-media "movie" 1 "Citizen Kane")
AssertionError Assert failed: (keyword? group)  user/add-to-media (NO_SOURCE_FILE:1)

But as this code becomes more mature, you might want to move to use something like Typed Clojure, to get warnings at compile time, rather than runtime. (And again, I like Gradual Typing because I am a big believer in dynamic data-typing, when a project is young — I only believe in static data-typing once a project is mature.)

Suppose you have a namespace for Movie and a namespace for User. Suppose you promiscuously share the namespace “entertainment.tracking” in all of the other namespaces. Have we lost safety? No, because we can enforce any contract we need to, on the functions that update our state. Is “entertainment.tracking” the same as a Singleton instance? Not really, because you can call “add-to-media” or “add-to-favorites” in different threads, simultaneously, many times, because they are just functions, they are not attached to a Singleton instance. Is the task of wiring our app together made easier? Yes, for sure, because the question “How do I load my movie and user data?” does not have to be part of the process of instantiating “entertainment.tracking”, and that’s because we never instantiate “entertainment.tracking”, because it is not a class and so will never be an object!
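The claim that functions like “add-to-media” are safe to call from many threads simultaneously rests on how swap! works: it applies its function atomically and retries on contention. A small self-contained sketch (with an illustrative counter, not the tracking code above):

```clojure
;; 100 threads incrementing one atom: no locks, no lost updates.
(def counter (atom 0))

(let [updates (doall (for [_ (range 100)] (future (swap! counter inc))))]
  (run! deref updates)) ; wait for every future to finish

@counter ; => 100
```

With mutable objects and unsynchronized methods, a test like this routinely loses increments; with an atom, the count always comes out exact.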

Are we free of all questions about organizing our code? No, but the questions are simpler because we can be more flexible. We are not binding state and behavior, so we have more choices about how we organize things.

Are we free of all configuration issues? No. We will look at configuration in a moment. First, let’s look at some of the alternatives to Dependency Injection.

How do we avoid Dependency Injection?

Going back to Michael Bevilacqua-Linn, having listed the benefits of Dependency Injection, he follows up with an example of function injection. The example is somewhat awkward, because he wants to demonstrate the pattern, and yet almost no one writes code like this in Clojure, as he later acknowledges:

There’s less of a need for a Dependency Injection-like pattern when programming in a more functional style. Functional programming naturally involves composing functions, as we’ve seen in patterns like Pattern 16, Function Builder, on page 167. Since this involves composing functions much as Dependency Injection composes classes, we get some of the benefits for free from functional composition.

However, simple functional composition doesn’t solve all of the problems Dependency Injection does. This is especially true in Scala because it’s a hybrid language, and larger bodies of code are generally organized into objects.

…The unit of injection in Clojure is the function, since Clojure is not object oriented. For the most part, this means that the problems we solve with Dependency Injection need only a bit of special treatment in Clojure. To stub out functions for testing purposes, we can use a macro named with-redefs, which allows us to temporarily replace a function with a stub.
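The with-redefs stubbing mentioned above might look like this (the function names are my own illustration):

```clojure
;; The "real" dependency:
(defn fetch-user [id] {:id id :name "from the real service"})

(defn greeting [id]
  (str "Hello, " (:name (fetch-user id))))

;; In a test, temporarily replace it with a stub:
(with-redefs [fetch-user (fn [_] {:id 1 :name "stub"})]
  (greeting 1)) ; => "Hello, stub"
```

Outside the with-redefs block, greeting goes right back to calling the real fetch-user, so no injection framework was ever needed to make the code testable.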

Clojure doesn’t have a direct analog to Dependency Injection. Instead, we pass functions directly into other functions as needed. For instance, here we declare our “get-movie” and “get-favorite-videos” functions:

  (defn get-movie [movie-id]
    {:id "42" :title "A Movie"})

  (defn get-favorite-videos [user-id]
    [{:id "1"}])

Here we pass them into get-favorite-decorated-videos where they’re used:

  (defn get-favorite-decorated-videos [user-id get-movie get-favorite-videos]
    (for [video (get-favorite-videos user-id)]
      {:movie (get-movie (:id video))
       :video video}))

Another possibility is to use Pattern 16, Function Builder, on page 167, to package up the dependent functions in a closure.

However, in Clojure, we generally only do this sort of direct injection when we want the user of the function to have control over the passed-in dependencies. We tend not to need it to define the overall shape of our programs.

Instead, programs in Clojure and other Lisps are generally organized as a series of layered, domain-specific languages.

Like I said, the example is awkward, because it is rare to write Clojure code like this. A pattern that is common in Clojure is having a Factory function that generates customized closures for some particular purpose. Michael Bevilacqua-Linn offers an example, which I excerpt below, and then I’ll offer my own example.

By the way, “get-favorite-decorated-videos” is probably a bad name for a function. It breaks the rules that Stuart Sierra gave for naming Clojure functions. Since “get-favorite-decorated-videos” is a pure function, it should be named after the data-structure that it returns. That is a naming convention that helps support “referential transparency”, that is, the idea that the function is interchangeable with the value returned, and therefore might as well have the name you might give to the variable you would assign the value to. I would go with “users-favorite-media” but some would regard that as verbose, and I can imagine some would simply go with “favorite-media”.

Michael Bevilacqua-Linn offers this example of the Function Builder Pattern:

Sometimes we’ve got a function that performs a useful action, and we need a function that performs some other, related action. We might have a “vowel?” predicate that returns “true” when a vowel is passed in and now we need a “consonant?” that does the same for consonants.

Other times, we’ve got some data that we need to turn into an action. We might have a discount percentage and need a function that can apply that discount to a set of items.

With Function Builder, we write a function that takes our data or function (though, as we’ve seen, the distinction between functions and data is blurry) and uses it to create a new function.

To use Function Builder, we write a higher-order function that returns a function. The Function Builder implementation encodes some pattern we’ve discovered.

For example, to create a “consonant?” predicate from a “vowel?” predicate, we create a new function that calls “vowel?” and negates the result. To create “odd?” from “even?”, we create a function that calls “even?” and negates the result.

There is an obvious pattern here. We can encode it with a Function Builder implementation named “negate”. The “negate” function takes in a function and returns a new one that calls the passed-in function and negates the result.
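That `negate` builder is small enough to sketch directly. This is a minimal illustration of the pattern described above; the `vowel?` predicate is my own stand-in:

```clojure
;; negate is a Function Builder: it takes a predicate and returns a
;; new predicate that calls the original and negates the result.
(defn negate [f]
  (fn [& args] (not (apply f args))))

(defn vowel? [c]
  (contains? #{\a \e \i \o \u} c))

(def consonant? (negate vowel?))

(consonant? \b) ;; => true
(consonant? \e) ;; => false
```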

Functions from Static Data

One way to use the Function Builder is to create functions out of static data. This lets us take a bit of data – a noun – and turn it into an action – a verb. Let’s look at a function that takes a percentage and creates a function that calculates discounted prices based on those percentages.

(defn discount [percentage]
  {:pre [(and (>= percentage 0) (<= percentage 100))]}
  (fn [price] (- price (* price percentage 0.01))))

We can create a discount function and apply it immediately as an anonymous function:

=> ((discount 50) 200)
100.0

To be clear, this returns a function:

(discount 50)

Wrapping that call in another set of parens applies the returned function to the price 200:

((discount 50) 200)

One often sees this in tutorials and examples, but it is a bit rare in real world projects (though not unknown).

Finally, Michael Bevilacqua-Linn explains:

And if we want to name our discount function to use it multiple times, we can do so:

=> (def twenty-five-percent-off (discount 25))

=> (apply + (map twenty-five-percent-off [100.0 25.0 50.0 25.0]))
150.0

=> (apply + (map twenty-five-percent-off [75.0 25.0]))
75.0

Veering slightly from the question of Dependency Injection, I'll offer another example of a Factory function from my code.

I have been working on a little side project that attempts to use the Amazon Echo to get data out of SalesForce. One issue that comes up: what happens if a request comes in where the user asks about a particular account, but I cannot find that Account in SalesForce? I need two paths in the code, one if I find the Account, and one if I don't. But I can't know whether the Account will be found until after I query SalesForce, and that happens after the "Intent" is read from the Amazon input. (There is a separate debate to be had about whether this is the right organization of the code, and some on the Amazon developer forums have advocated an MVC approach, which I'm still considering, so I offer this only as an example of a Factory function, and not as an example of the right way to deal with the Echo.)

(defn get-what-chance-do-we-have [request intent company-name]
  (fn [account-id this-users-salesforce-credentials]
    (let [opportunities (intent/discern {:intent "get-opportunities-for-this-account-id"
                                         :salesforce-credentials this-users-salesforce-credentials})
          opportunities-report (make-opportunities-report opportunities)
          likelihood (calculate-likelihood opportunities)
          outputSpeech-text (str " For " company-name " the average probability of all open opportunities is " likelihood " percent ")
          response-in-amazon-format (make-response company-name outputSpeech-text false)]
      response-in-amazon-format)))

(defn no-account-id-response [company-name]
  (make-response "No company specified" (str "No " company-name " found." " Say name again.") false))

(defn account-specific-response [request company-name intent-fn]
  (let [this-users-salesforce-credentials (salesforce-credentials request)
        ai (account-id this-users-salesforce-credentials company-name)]
    (if ai
      (intent-fn ai this-users-salesforce-credentials)
      (no-account-id-response company-name))))
(defn chat [request]
  (clojure.pprint/pprint request)
  ;; try+ and (catch Object ...) come from the slingshot library
  (try+
   (check-amazon-signature request)
   (check-amazon-timestamp request)
   (catch Object o (hostile-exception request &throw-context)))
  (try+
   (if (clojure.string/blank? (:accessToken (:user (:session (:body request)))))
     (let [intent (get-in request [:body :request :intent] {:name "default"})
           intent-name (:name intent)
           company-name (get-company-name-from-intent intent)
           _ (println " the company-name is " company-name)
           _ (println "the intent-name: " intent-name)
           response-in-amazon-format (cond
                                       (= intent-name "GetWhatChanceDoWeHave") (account-specific-response request company-name (get-what-chance-do-we-have request intent company-name))
                                       (= intent-name "GetWhatIsTheValueOfOpenOpportunities") (account-specific-response request company-name (get-what-is-the-value-of-open-opportunities request intent company-name))
                                       (= intent-name "GetCompany") (account-specific-response request company-name (get-company request intent company-name))
                                       (= intent-name "WhoOwnsTheAccount") (account-specific-response request company-name (who-owns-the-account request intent company-name))
                                       (= intent-name "WhoOwnsTheOpportunity") (account-specific-response request company-name (who-owns-the-opportunity request intent company-name))
                                       (= intent-name "AMAZON.HelpIntent") (get-help-response)
                                       (= intent-name "AMAZON.StopIntent") (get-stop-response)
                                       :else (get-default-response))]
       {:status 200
        :headers {"Content-Type" "application/json"}
        :body (cheshire/generate-string response-in-amazon-format)}))
   (catch Object o
     (friendly-exception request &throw-context))))

So assume that Amazon has listened to the audio that it received from someone's Echo device, and Amazon has decided this request is going to my app, and the Intent is GetWhatChanceDoWeHave, so we arrive at this line:

  (= intent-name "GetWhatChanceDoWeHave") (account-specific-response request company-name (get-what-chance-do-we-have request intent company-name))

This means "get-what-chance-do-we-have" is called and it returns a function which is passed as a parameter to "account-specific-response". The signature for "account-specific-response" is:

account-specific-response [request company-name intent-fn]

The result from "get-what-chance-do-we-have" is the function that here becomes "intent-fn". If "account-id" was able to find the requested Account in SalesForce, then the "intent-fn" function is called. This anonymous closure has this signature:

(fn [account-id this-users-salesforce-credentials]

so this should only be called if the account-id was, in fact, found. Returning a function from "get-what-chance-do-we-have" is useful because we want to defer execution until we get the response from SalesForce -- only then do we know if we have found the requested Account.
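The mechanics of that deferral can be shown with a toy version. All the names here are hypothetical stand-ins for my SalesForce code: the factory returns a closure, and the closure only runs if the lookup succeeds.

```clojure
;; Factory: returns a closure that builds a response for one company.
(defn make-report-fn [company-name]
  (fn [account-id]
    (str "Report for " company-name " (account " account-id ")")))

;; Only call the closure if the account lookup succeeds.
(defn run-if-found [lookup company-name intent-fn]
  (if-let [account-id (lookup company-name)]
    (intent-fn account-id)
    (str "No account found for " company-name)))

;; usage with a map standing in for the SalesForce query:
(run-if-found {"Acme" 42} "Acme" (make-report-fn "Acme"))
;; => "Report for Acme (account 42)"
```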

Does this take us away from the topic of Dependency Injection? Somewhat, but the argument I'm making is that wiring up dependencies is simpler in Functional languages. Factory functions can play a big role in organizing those dependencies, since they allow you to defer execution until you have the needed dependencies. This is not the only technique, but it is a good one, and, importantly, a simple one. And here is a really important point: you can do all of this entirely within your app, without introducing an outside Dependency Injection system. To me, that is a big win. When you introduce an outside Dependency Injection system, you introduce a huge amount of complexity into your app.

Dependency Injection Is Amazing redux

Many, perhaps most, Functional programmers feel that Dependency Injection is important and cannot be avoided. More than that, many feel this justifies the use of an Object Oriented style. Some questions come up repeatedly:

Is it acceptable to stuff configuration information into a global var?

Is it acceptable to write code that relies on global configuration? Doesn't that violate all kinds of rules about self-contained functions?

Since any large project must have dependencies, should we try to make those dependencies as obvious as possible?

Even if we wish to avoid mutability as much as possible, initiating our apps at startup time is a big exception, yes?

In the world of Clojure, the most popular approach to Dependency Injection is the "Component" library, created by Stuart Sierra. The argument for the system is given on that page:

This is primarily a design pattern with a few helper functions. It can be seen as a style of dependency injection using immutable data structures.

Large applications often consist of many stateful processes which must be started and stopped in a particular order. The component model makes those relationships explicit and declarative, instead of implicit in imperative code.

Components provide some basic guidance for structuring a Clojure application, with boundaries between different parts of a system. Components offer some encapsulation, in the sense of grouping together related entities. Each component receives references only to the things it needs, avoiding unnecessary shared state. Instead of reaching through multiple levels of nested maps, a component can have everything it needs at most one map lookup away.

Instead of having mutable state (atoms, refs, etc.) scattered throughout different namespaces, all the stateful parts of an application can be gathered together. In some cases, using components may eliminate the need for mutable references altogether, for example to store the "current" connection to a resource such as a database. At the same time, having all state reachable via a single "system" object makes it easy to reach in and inspect any part of the application from the REPL.
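To make the quoted description concrete, here is a minimal sketch of the Component pattern, assuming the com.stuartsierra/component library is on the classpath. The `Database` and `App` records are my own toy examples, not from Sierra's documentation.

```clojure
(require '[com.stuartsierra.component :as component])

;; A stateful component: acquires its "connection" on start,
;; releases it on stop.
(defrecord Database [host conn]
  component/Lifecycle
  (start [this] (assoc this :conn (str "connection-to-" host)))
  (stop [this] (assoc this :conn nil)))

;; A component that depends on the database.
(defrecord App [database]
  component/Lifecycle
  (start [this] this)
  (stop [this] this))

(defn new-system []
  (component/system-map
   :database (map->Database {:host "localhost"})
   ;; declare the dependency: :database is injected into :app
   :app (component/using (map->App {}) [:database])))

;; (component/start (new-system)) starts :database first, then
;; injects the started :database into :app before starting :app.
```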

Alex Miller also developed a system for Dependency Injection in Clojure. He talks about how gross it can look to try to pass around every parameter that a function might need:

Say you have some Clojure functions that look like this:

(defn foo [config queue db-conn arg1]
...use config, queue, db-conn... )

(defn bar [config queue db-conn arg1 arg2]
...use config, queue, db-conn... )

Seeing all those config options and system resources on every function just looks gross right?

Much of the debate comes down to the conflict between:

1.) state should be kept in some global var

2.) state should be passed to functions via their arguments

Alex Miller's point comes up in response to developers who try to be militant about passing all necessary information as argument parameters, rather than depending on global vars. But passing all arguments through functions leads to this issue:

function1 -> function2 -> function3 -> function4 -> function5

Sometimes you get long chains of functions like this. What happens if, in this chain, the only function that needs access to the database is function5? That means you need to pass the database connection to function1 and function2 and function3 and function4 just so it can be given to function5. Some developers feel this violates the spirit of functionalism, since you have 4 functions that are taking an argument but not using it.
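The situation described above looks like this in code. All the names are hypothetical: only `function5` actually uses `db-conn`, yet every function in the chain must accept and forward it.

```clojure
;; function5 is the only one that uses db-conn...
(defn function5 [db-conn x] (str "query on " db-conn " with " x))

;; ...but functions 1 through 4 must all thread it along.
(defn function4 [db-conn x] (function5 db-conn (inc x)))
(defn function3 [db-conn x] (function4 db-conn (inc x)))
(defn function2 [db-conn x] (function3 db-conn (inc x)))
(defn function1 [db-conn x] (function2 db-conn (inc x)))

(function1 "fake-conn" 0) ;; => "query on fake-conn with 4"
```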

I personally favor this approach. I'm happy to pass a lot of arguments to functions if it allows me to decrease my dependence on global vars. And I don't think it is a violation of the spirit of Functionalism to pass an argument to a function even when all that function does is hand the argument along to another function. To feel otherwise seems too strict to me.

As Alex Miller points out, the attempt to create a global dependency manager has its own set of worries:

If you want your dependency holder to be stateful, you can put refs in there, however I would not recommend starting that way. You have then created the opportunity for all of your functions to be stateful naughty impure tricksy functions in a new way. Bad enough they’re talking to the external world in the first place!

Somewhere higher up, I’d probably want a SystemState record, that saved the SystemDependencies record inside a ref:

(defrecord SystemState
  [deps-ref]) ;; deps-ref: a ref -> SystemDependencies

Calls into the subsystem would then extract the SystemDependencies from the SystemState prior to the call.

Depending on your ref granularity, you might extract from many stateful refs (in a doref block of course for a consistent view) to construct the SystemDependencies record.

...One consequence of creating these catch-all containers of dependencies is that it is then easy to be sloppy and just use that everywhere and thus imply that all your functions use all your dependencies. Depending on the scope of your dependency context record and function protocols, this may be a useful simplifying assumption. Or it could just as easily obscure the dependencies used by every function in your system. Creating the dependency records at a subsystem level, and function protocols at the namespace level balances concerns ok.

I cannot find the quote right now, but I know that Stuart Sierra responded to this:

"Or it could just as easily obscure the dependencies used by every function in your system."

For Stuart Sierra, this was the biggest reason to use something like his Component system: if there have to be dependencies, then we will want to make them obvious. This is why he uses Records and Protocols, to formally specify the expectations he puts on his dependencies. This can be seen as an Object Oriented style.

Our conversations are complicated by the lack of agreed upon terminology

On the Clojure Google Group, conversations about Dependency Injection and Object Oriented strategies are complicated by the fact that there is no agreement about terminology. And so you end up with conversations such as "stuartsierra/component is oop, can clojure namespace self init to solve the dependencies?", where several of the posts are simply about definitions.

Xiangtao Zhou wrote:

Constructing a simple Clojure project is trivial: just make functions. If the project grows large, with more datasources, message queues, and other storage, the dependencies problem is on the table.

One solution is stuartsierra/component, using a system to configure the dependency graph, keeping component construction and dependency resolution separate.

If we make each namespace run a code block that initializes the namespace, like the "start" method in component, is this a good way to solve the dependencies?

To which Atamert Ölçgen responded:

How is stuartsierra/component OOP when it is building an immutable object graph? (Contrast that to Guava etc.)

To which Gary Verhaegen added:

OOP does not mean mutab... erh... anything, really, but there is an argument to be made for calling an immutable blob that carries data and the operations to act on it an "immutable object". If every "mutative" operation returns a modified copy of the object, you can have immutable OOP.

And Serzh Nechyporchuk added:

OOP always talks about objects (mutable or immutable) in terms of polymorphism, encapsulation and inheritance. The Components library misses only inheritance, which is, obviously, the "killer feature" of OOP. So we could say that the Components library takes the good ideas from OOP.

Sometimes I think the worst thing about Object Oriented Programming is that no one can agree what constitutes Object Oriented Programming.

In yet another conversation about a dependency pattern, Colin Yates offers this bit of sad wisdom:

I think this discussion has run into the age-old problem of ‘subjective interpretation’, namely, multiple people all agreeing with a principle (i.e. complexity sucks) but then having _different_ interpretations of where that principle is challenged/upheld. Unfortunately in this industry there are hardly any unambiguous and objective statements because most of what we do is more artistic than engineering (zips up the flame suit now).

And, as ever, pragmatism throws it all out of the window :-)

One man’s complexity is another man’s simplicity and so on.

I'd like to think that my approach is Really Simple And 100% Pragmatic, but I know many will disagree.

Having clear Start, Stop and Main functions is a great pattern

In that conversation on Google Groups, I offered this response:

There are ways to handle dependencies without going down the Object Oriented Programming route that Stuart Sierra took. However, there are a lot of good ideas in Stuart Sierra's Component library, and to the extent that you can borrow those ideas, you end up with code that resembles "best practice" in the Clojure community.

For me, one "Big Idea" that I got from Stuart Sierra is that there should be a "start" function that can be called either from the REPL or from -main (for when your app runs as a daemon). The other "Big Idea" I got was that there should be functions for handling the lifecycle of all the components in your app, so you can easily start and stop and restart. So nowadays my "core.clj" tends to look like this:

(ns tma.core
  (:require
   [tma.start :as start]
   [tma.stop :as stop])
  (:gen-class))

;; what you would call from the REPL to re-initiate the app
(defn start []
  (try
    (start/start) ;; assuming tma.start exposes the real startup function
    (catch Exception e (println e))))

(defn stop []
  (try
    (stop/stop) ;; assuming tma.stop exposes the real shutdown function
    (catch Exception e (println e))))

;; Enable command-line invocation
(defn -main [& args]
  (.addShutdownHook (Runtime/getRuntime)
                    (Thread. #(do (println "Tma is shutting down")
                                  (stop))))
  (start))
So I can call "start" and "stop" from the REPL, or, if the app is running as a daemon, "start" gets called by -main, and "stop" is registered as a shutDownHook.

"start" should trigger all the events that initiate the app, for instance, setting up connections to the database, if needed.

"stop" should clean up all the resources that the app might be using, for instance, close any database connections.

A well-designed Functional app can be thought of as a large scale version of the Recursive Function Pattern.

The "start" and "stop" functions are a great idea that I got from Stuart Sierra; however, I did not feel the need to go as far toward Object Oriented Programming as he did. And some of the things he has suggested as "best practice" strike me as odd.

Consider his video "Clojure large scale patterns techniques".

Or his video "Components -- Just Enough Structure".

At one point he says that "Object Oriented code has the advantage that it is obvious where you configure your code."

As you can imagine, I find that surprising. Remember everything that Myer wrote about Dependency Injection? I would suggest the worst aspect of Object Oriented Programming is the complexity of the configuration.

How much abstraction is too much?

It's possible I will change my mind on this issue. I would like to find a more declarative approach than what I've been doing lately. My "start" and "stop" functions are full of procedural code. Listing the functions that I want to call in some external file is something I'd like to explore. I think I can develop something more declarative without going down the road that Stuart Sierra has gone down. We will see. I'm still researching this issue.

(I think there is a fascinating sociological question that haunts the Clojure community regarding Object Oriented Programming. Many of the best Clojure developers spent 10 or 20 years doing Object Oriented Programming before they came to Clojure, so how much of the Object Oriented style do they bring with them because it feels natural, and how much does it feel natural only because they spent so many years with it?)

I should mention, there are some very good Clojure programmers who feel that Stuart Sierra did not go far enough, and that we need even more indirection than what is offered by the Component library. See Chris Zheng's article on The Abstract Container Pattern. He feels this pattern allows us to enforce necessary contracts on our dependencies:

One way to look at design using abstract classes (or any language feature for that matter) is that it is a programming contract that is strictly enforced by the language itself. When we inherit functionality from an abstract class, we make a binding contract with the compiler. The contract stipulates that we have to implement a specific set of functions and we have to do it in a certain way. The compiler is meticulous in what it accepts as valid implementations of this contract. However, if we are able to satisfy its constraints, we can leverage all the functionality of a library built around the abstract implementation. The apache pool api for instance is a great example of how abstract classes are used to great effect. The string buffer example shows just how trivial it is to add pooling functionality for improved optimisation.

Although Clojure generally does not restrict the programmer to form these contracts, we are still free to use such a contract within our own code base. Although we don't have a compiler to check that our contract is conformant (though it could still be done through macros), we just have to be disciplined with our code and tests so that we achieve the same code reuse and flexibility that Java forces on us.

Zheng and I discussed this on Google Groups, and this was my response:

If I understood your original article, you were saying something that amounted to these 3 assertions:

1.) to future-proof our code against changes, and to avoid being verbose, we need polymorphism.

2.) we need some way to establish constraints (contracts) on that polymorphism, or else it will become difficult to understand, and it might be extended in dangerous and unintended ways.

3.) the Abstract Container gives us an excellent way to both achieve the polymorphism we are after, while also making clear what the limits on this polymorphism should be. The Abstract Container indicates to future developers how we expect them to extend this code.

I agree with #1 and #2 but I have my doubts about #3. While that Pattern can clearly work (it's been implemented a million times, so clearly it can be made to work) it strikes me as being more verbose than necessary. I would ask if there is a less verbose way of achieving the goals of #1 and #2?

Also, I was intrigued by this:

> In the case of iroh... if I had used strictly multimethods, I would have
> been very confused. If I had used strictly protocols... well I couldn't for
> a number of reasons.. but if I did, I would have been even more
> confused because of the number of subconditions that I had to implement.

This made me curious about iroh, so I went and looked at the source, and I noticed this:

(defn multi-element [m v]
  (element {:tag :multi
            :name (get-name v)
            :array v
            :lookup m
            :cache (atom {})}))

(defmethod to-element clojure.lang.APersistentMap [m]
  (let [v (to-element-array m)]
    (multi-element m v)))

(defmethod to-element clojure.lang.APersistentVector [v]
  (let [m (reduce
           (fn [m ele]
             (assoc-in m (to-element-map-path ele) ele))
           {} v)]
    (multi-element m v)))

It strikes me that you could achieve a high level of polymorphism (satisfying #1) if multi-element was a function that was passed in as an argument to to-element. That would be flexible, although, to satisfy #2, you would want to establish some kind of contract to set some limits on that flexibility. There are 3 ways that I might do this:

a.) I might use run-time checks, such as pre-conditions, on the arguments given to multi-element.

b.) I might use something like Typed Clojure to have compile time warnings regarding the arguments given to multi-element.

c.) I might hard-code multi-element, as you have done, but the map that you've hard-coded could become a Record that I pass into multi-element as an argument, thus making multi-element more polymorphic.

These approaches let me achieve #1 and #2 without the complexity of the Abstract Container Pattern. Have I missed something?
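Option (a) is worth a quick sketch. This is a hypothetical illustration, not code from iroh: a run-time pre-condition constrains both the passed-in function and the map it will receive.

```clojure
;; Run-time contract via :pre conditions: the caller must pass a
;; function and a map that contains a :tag key.
(defn process-element [element-fn m]
  {:pre [(fn? element-fn) (map? m) (contains? m :tag)]}
  (element-fn m))

(process-element (fn [m] (:tag m)) {:tag :multi}) ;; => :multi
;; (process-element :not-a-fn {:tag :multi}) would throw an AssertionError
```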

He replied that he would think about this issue more. I am not sure if he later posted a reply (if I find a reply I will link to it).

Again, I see the need for contracts that give us safety, and I understand that dependencies are a difficult issue. But I want us to deal with those issues in the simplest way possible.

In terms of handling things such as database connections, a style that I favor is what I guess you could call the Functional Worker Pattern, where you have workers that are launched at startup, and they take full responsibility for configuring themselves (including equipping themselves with database connections, if needed) and also for later shutting themselves down. I'll post an example of this code in "Immutability enables concurrency". However, I admit, these workers (which are functions with an infinite loop in them, so they run continuously while the app is running) are full of imperative code. I do realize that there are some attractions to a more declarative approach. Workers full of imperative code is the price I have so far been willing to pay to avoid introducing all the complexities of full Dependency Injection systems.
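A hedged sketch of that worker idea, with hypothetical names: the worker acquires its own "connection" when it starts, loops until told to stop, and then releases the resource itself.

```clojure
;; start-worker launches a thread that configures itself, loops until
;; stopped, and cleans up on the way out. Returns a stop function.
(defn start-worker [work-fn]
  (let [running? (atom true)]
    (.start
     (Thread.
      (fn []
        (let [conn "self-acquired-db-conn"] ;; worker configures itself
          (while @running?
            (work-fn conn)
            (Thread/sleep 50))
          ;; worker shuts itself down: release conn here
          ))))
    ;; calling the returned function asks the worker to stop
    (fn [] (reset! running? false))))

;; usage:
;; (def stop! (start-worker (fn [conn] (println "working with" conn))))
;; (stop!)
```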
