It took me 10 years to write that 5 line app

(written by lawrence krubner, however indented passages are often quotes). You can contact lawrence at: lawrence@krubner.com, or follow me on Twitter.

There is the story where Picasso paints a quick sketch and wants a huge amount of money for it, and the art collector says “It only took you 15 minutes to make that sketch!” and Picasso says “I spent 60 years learning how to do that in 15 minutes.”

Maybe something like this applies when it comes to microservices. When I advocate for microservices, I bring 17 years of experience to the conversation. The first 7 of those years meant dealing with the Wild Wild West of web craziness, before there were many universally followed patterns, then several more years spent working with monoliths such as Ruby On Rails and Symfony, then several more years realizing which constellation of technologies best supports microservices. I don’t think you can apply microservices to Ruby On Rails — if you try you will meet with unrelenting pain. But if you adopt the right constellation of technologies, then microservices are the easier way to proceed, in nearly every respect.

I am surprised at the level of misunderstanding that I run into around this issue:

Microservices, like nosql databases, and complex deployment systems (docker) are very important solutions to problems a very small percentage of the development community has.
It just so happens that the portion of the community is the one most looked up to by the rest of the community so a sort of cargo cult mentality forms around them.

A differentiator in your productivity as a non-huge-company could well be in not using these tools. There are exceptions, of course, where the problem does call for huge-company solutions, but they’re rarer than most people expect.

And this:

My company built from the ground up with micro-architecture and it is an unmitigated disaster. Totally unnecessary, mind-numbing problems unrelated to end-user features, unpredictability at every step, huge coordination tax, overly-complex deployments, borderline impossible to re-create, >50% energy devoted to “infrastructure”, dozens of repos, etc.

The whole thing could be trivially built as a monolith on Rails/Django/Express. But that’s not exciting.

John Sheehan offered this reply:

We were also built from the ground up with microservices and had the exact opposite experience. Faster shipping (more value to end users), more predictability (APIs designed/behaved similarly across functions despite polyglot tech), much less coordination overhead (deployed dozens of times per day with a < 10 dev team, pre-release backends well in advance of the user-facing parts), etc. We had to invest a lot in infrastructure, but that was worth it for many other reasons as well. Dozens of repos is annoying, but not for a technical reason (a lot of SaaS like Bugsnag and GitHub used to charge by project). The biggest downside is it makes shipping an on-prem version nearly impossible. The infrastructure and the software are so inextricably linked that it is not portable in the least bit.

Back in the summer of 2013 I wrote “An architecture of small apps” which details my attempt to win over the engineering team at Timeout.com to using microservices. In that post I quote actual emails and Yammer messages, so you get a sense of how deeply misunderstood I was.

The linked article by Sean Kelly is deeply confusing. I wasn’t able to figure this out at all:

The simple fact of the matter is that microservices, nor any approach for modeling a technical stack, are a requirement for writing cleaner or more maintainable code. It is true that since there are less pieces involved, your ability to write lazy or poorly thought out code decreases, however this is like saying you can solve crime by removing desirable items from store fronts. You haven’t fixed the problem, you’ve simply removed many of your options.

First of all, it is absolutely valid for a store to keep its most valuable items away from the store front. If you go into a city that has high rates of crime, you will find stores that follow this strategy. I can’t imagine why Sean Kelly thinks this is a bad idea. But also, I don’t understand “you’ve simply removed many of your options” — I am not sure how that applies to the store analogy, and I know for certain that it does not apply to microservices in any way. The opposite is true: a monolith enforces one particular network topology (all code on one machine, scaled horizontally by putting a copy of 100% of the code on another machine, and then another machine, and then another machine, etc) whereas microservices allows an infinite variety of network topologies — anything you can imagine. I’ve enjoyed meals at prix fixe restaurants, and I’ve enjoyed meals at buffet style restaurants, but I’ve never argued that the buffet removes options that I could better take advantage of at the prix fixe place. Such an assertion would make no sense.

About this:

Distributed Transactions are never easy

Exactly, so there is real benefit to building your architecture around Eventual Consistency. And you should do this from the beginning, rather than build code for two years and then try to retrofit it with Eventual Consistency. Handling all possible errors is easier if you try to model them before you write a single line of code.
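As a rough sketch of what I mean by modeling the errors up front, consider something like the following Clojure snippet. The outcome names and policies are hypothetical, invented purely for illustration: the point is that every remote call resolves to one of a fixed set of outcomes, and the policy for each outcome is decided before any call is written, which is what makes Eventual Consistency tractable later.

```clojure
(ns example.outcomes)

;; Every remote interaction in the system resolves to one of these outcomes.
;; (Hypothetical names, purely for illustration.)
(def possible-outcomes
  #{:ok :timeout :connection-refused :bad-response :conflict})

;; Decide the policy for each outcome before writing the calls themselves.
(def policies
  {:ok                  :commit
   :timeout             :retry-with-backoff        ; the write may have landed, so retries must be idempotent
   :connection-refused  :retry-with-backoff
   :bad-response        :send-to-dead-letter-queue
   :conflict            :reconcile-later})         ; eventual consistency: record it now, reconcile asynchronously

(defn handle
  "Look up the agreed-upon policy for a remote-call outcome."
  [outcome]
  (get policies outcome :send-to-dead-letter-queue))

;; A timed-out write to a (hypothetical) billing service is retried, not lost:
(handle :timeout)
;; => :retry-with-backoff
```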

Sean Kelly apparently feels that the hard work can be avoided. Implicitly, Kelly is indulging in the Eight Fallacies of Distributed Computing when he treats these as questions that can be sidestepped:

There is a lot of complexity wrapped in the problem of involving multiple remote services in a given request. Can you call them in parallel, or must they be done serially? Are you aware of all of the possible errors (both application and network level) that could arise at any point in the chain, and what that means for the request itself? Often, each of these distributed transactions needs its own approach for handling the failures that could arise, which can be a lot of work not only to understand the errors, but to determine how to handle and recover for each of them.

What Kelly forgets is that monoliths can still lose their connection to their database, or their CDN, or their cache layer, or their file server (for images or other downloads) or their file receiver (for image uploads, or any other kind of file).

Assume for a moment that you are allowed to use a monolith such as Ruby On Rails or Symfony or Django. And assume that the project is so simple that you can keep everything on a single machine: the database, all files, everything. Are you suddenly free of the need to consider “all of the possible errors (both application and network level) that could arise at any point in the chain, and what that means for the request itself?” Sean Kelly does not appear to be stupid, so it is surprising that he seems to think this.

Let’s stick with this hypothetical for a moment longer. You’ve been asked to build a site where Ph.D. students can hire some quick help for technical copy editing. That is, they hire people who can proofread a Ph.D. thesis. You decided to build the whole thing in a monolith like Ruby On Rails, and to keep things simple, you decide to keep all of it on one computer. What could possibly go wrong?

1.) What if Rails loses its database connection?

2.) What if Rails consumes all of the allowed database connections (this happened to me at ShermansTravel.com)?

3.) What if the server where the app lives runs out of disk space and so no one is able to upload their files any more?

4.) What if Nginx dies and restarts but fails to find the Rails app, or vice versa?

And why, why, why do people so often throw these questions at me, like accusations, when I suggest microservices, even though exactly the same questions apply to monoliths?

About this:

While I have no doubt folks who pivoted to microservices saw individual code paths isolated inside of those services speed up, understand that you’re also now adding the network in-between many of your calls. The network is never as fast as co-resident code calls, although often times it can be “fast enough”.

That sounds like an important point, since when we think of microservices, some people think of HTTP, which often costs as much as 150 to 250 milliseconds "over the network". But what does it mean to go "over the network"? If I am at home, and I contact a site such as Amazon.com, then I expect the HTTP call to cost me 150 milliseconds. But what if a service in a data center is contacting another service in the same data center? I spoke to Antonio Pellegrino, who is CEO of LSQ.io. He says he experimented with this carefully and found that network calls between services in AWS data centers often cost just 15 milliseconds. That is fast. And yet, I know of connections that are even faster. If, instead of HTTP, your apps send raw bytes to each other via Redis, you will sometimes get even faster speeds.
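As a rough illustration of what that can look like, here is a minimal Clojure sketch of two services passing messages through Redis rather than HTTP. It assumes the carmine Redis client and a Redis instance on localhost; the queue name and payload are invented for illustration.

```clojure
(ns example.redis-transport
  (:require [taoensso.carmine :as car]))

;; Connection map for carmine; assumes Redis is reachable on localhost.
(def conn {:pool {} :spec {:host "127.0.0.1" :port 6379}})

(defmacro wcar* [& body] `(car/wcar conn ~@body))

;; Producer service: push a message onto a Redis list acting as a queue.
(defn enqueue-order! [order]
  (wcar* (car/lpush "orders:incoming" (pr-str order))))

;; Consumer service: block for up to 5 seconds waiting for the next message.
(defn next-order []
  (let [[_queue payload] (wcar* (car/brpop "orders:incoming" 5))]
    (when payload
      (read-string payload))))

(comment
  (enqueue-order! {:order-id 42 :total 19.95})
  (next-order))
;; => {:order-id 42, :total 19.95}
```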

I do think this is a problem:

While on the tin it might seem simpler to have a smaller team focused on one small piece of the puzzle, ultimately this can often lead to many other problems that dwarf the gains you might see from a having a smaller problem space to tackle. The biggest is simply that to do anything, you have to run an ever-increasing number of services to make even the smallest of changes. This means you have to invest time and effort into building and maintaining a simple way for engineers to run everything locally. Things like Docker can make this easier, but someone still needs to maintain these as things change.

The solution here is to move to a technology where building and running a lot of tasks is easier. This is exactly why I switched to Clojure when I switched to microservices. Trying to deal with Ruby On Rails, or Sinatra, becomes a nightmare when you are trying to use microservices. As I said above, I had to adopt a whole constellation of technologies when I switched to microservices, and that very much includes build tools and devops tools to keep long-running daemons up (I often rely on Supervisord).
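For instance, a Supervisord entry that keeps one such daemon alive can be as small as the following sketch; the program name, jar path, and port flag here are hypothetical, not taken from any real project.

```ini
[program:users-service]
; Launch the uberjar for one microservice and restart it if it dies.
command=java -jar /srv/users-service/users-service.jar --port 10010
directory=/srv/users-service
autostart=true
autorestart=true
stdout_logfile=/var/log/users-service.out.log
stderr_logfile=/var/log/users-service.err.log
```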

Is this actually more work than running Ruby On Rails? I’ve run into absolute hell trying to get all the paths set up to work with Ruby. I lost whole days trying to use rvm (Ruby Version Manager) to install the right version of Ruby, and then to use the correct path for the gems. On my Mac, I’ve faced endless problems between the path that rvm looks at, versus the path where Homebrew installs “gem”. And then I’ve run into other problems getting “bundle exec” to obey the paths that I’ve set.

In a famous essay, Ryan Tomayko said “I like Unicorn because it’s Unix”. He points out that many Ruby commands are just thin wrappers around underlying Unix commands. Why reinvent the wheel? Why not just use the underlying Unix? You are running your app on a powerful Linux server, so why not take advantage of that, and use all the hundreds of incredible tools that Linux offers? This was an attitude that had a following for 20 years, from roughly 1990 to 2010. Many people are still loyal to the idea. But this idea will kill you if you try to use it with microservices. This idea has been dying for a while, and the reality of that death is exactly why Docker was invented. We now rely on too many servers, and we cannot customize them the way we want, because we need to spin them up and shut them down quickly. Back in 1990 the server was something that was lovingly cared for by a sysadmin. The underlying machine could be part of your app; the software on that server was something your app could depend on. Nowadays, in the cloud, we might kill 50 servers and then spin up 121 new ones. It is extremely important that your app contain all of its own dependencies. You cannot trust some external server to give you your dependencies — not unless you have a way to enforce a contract on the provisioning of those dependencies, and a way to fail fast (and loudly) when the contract is broken.

Given the right setup (Clojure, uberjars, a run time script, a config file for ports), the only thing that would keep someone from running all the software on their own machine is how much RAM that machine has. It’s also perfectly reasonable to have a development environment where some services run in the cloud, and the local developer depends on those services in the cloud (that is, they don’t bother running every service locally).
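To make “a config file for ports” concrete, here is a hypothetical sketch of what such a file might contain; the service names, hosts, and port numbers are invented for illustration.

```clojure
;; ports.edn -- a hypothetical file giving each service a well-known address.
;; A developer who only wants to run two services locally can point the rest
;; at shared instances in the cloud.
{:users-service   {:host "127.0.0.1" :port 10010}
 :billing-service {:host "127.0.0.1" :port 10020}
 :search-service  {:host "dev.example.internal" :port 10030}
 :email-service   {:host "dev.example.internal" :port 10040}}
```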

Nowadays I use a microservices approach on almost everything. It’s easier and it’s faster. It offers more flexibility. Refactoring is easier, and enforcing boundaries is easier.

But you do have to have an understanding of the whole constellation of technologies that support that style of development.

Post external references

  1. https://news.ycombinator.com/item?id=12508655
  2. http://basho.com/posts/technical/microservices-please-dont/
  3. https://blog.fogcreek.com/eight-fallacies-of-distributed-computing-tech-talk/
  4. https://www.lsq.io/
  5. http://2ndscale.com/rtomayko/2009/unicorn-is-unix