Once again, the shift to “smart services, dumb pipes”

(written by lawrence krubner, however indented passages are often quotes). You can contact lawrence at: lawrence@krubner.com

Yesterday I linked to the article over at Martin Fowler’s website where he wrote about the shift away from complex routing frameworks, towards a system of “smart services, dumb pipes”. Here is one more data point:

At Digg our SOA consisted of many Python backend services communicating with each other as well as being used by our PHP frontend servers and Tornado API servers. They used Apache Thrift for defining the interfaces, clients and as the underlying protocol.

…Coming off the Digg SOA experience, we changed up a few things when it came to building the SOA at SocialCode. Backend services and APIs stayed in Python but rather than using Thrift we chose HTTP as the protocol, and we implemented the frontend products as JavaScript applications which communicated directly with the HTTP APIs, making the browser a native API client.

…One of the most important decisions we made from Digg to SocialCode was switching from the Thrift protocol to HTTP. This allowed the browser to be a native client which uses the APIs directly, but more than that it also made debugging and experimentation easier because most software developers are extremely comfortable with HTTP and HTTP servers, whereas relatively few are comfortable with Thrift. (This also extends to reusing their existing expertise for optimizing and debugging web servers and requests.)

We almost used Thrift at ShermansTravel.com when we rebuilt the website. I read a great deal about it, but couldn’t see the advantage. What should be wanted, always, is a way to reduce the amount of code written. Thrift, it seemed to me, moved where we wrote the code, but did not necessarily shrink how much code we would have to write in the end.
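To make the contrast concrete: with Thrift you define an IDL, generate client and server stubs, and call through the generated client; with plain HTTP, any client that can speak HTTP is already a client. Below is a minimal sketch of the HTTP side, using only the Python standard library. The endpoint path and JSON shape are hypothetical, for illustration only.

```python
# A "dumb pipe" service call: plain HTTP with a standard-library client.
# The /users/{id} endpoint and the response shape are hypothetical.
import json
import urllib.request


def get_user(base_url, user_id):
    """Fetch a user record from a backend service over plain HTTP.

    Any HTTP client -- curl, a browser, another service -- can make
    this same request, which is the interoperability argument for
    choosing HTTP over Thrift as the service protocol.
    """
    req = urllib.request.Request(
        f"{base_url}/users/{user_id}",
        headers={"Accept": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

Nothing here is generated or protocol-specific, which is the point: debugging this is debugging HTTP, something most developers already know how to do.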

I am curious about which form of SOA Will Larson is talking about, as he writes:

SOAs are not a panacea, rather they significantly complicate your architecture. Nor are they fundamentally a high performance architecture, they impede performance by requiring additional moving parts.

Martin Fowler mentioned the breathtaking complexity of some ontologies aimed at SOA. This is why some people are calling their new systems “microservices” because the label “SOA” means so many different things, some with negative connotations.

This is true:

If you shouldn’t immediately start with an SOA, then how do you move over to an SOA? The answer is “slowly and with great care.” You’ll either be moving over during a full rewrite–in which case, good luck!–or you’ll be incrementally transitioning from your existing monolithic architecture.

I’ve written before about “incrementally transitioning from your existing monolithic architecture”. To me, this is a huge advantage.

Will Larson comes out in favor of dumb pipes:

Hot on the heels of picking your SOA’s protocol is deciding the kind of API clients you’ll use for interacting with your APIs. Broadly, there is a tradeoff between using sophisticated clients which hide as much complexity as possible (buying ease of use at the loss of control or awareness) versus dumb clients which expose all the ugly details of the underlying API (control and awareness at expense of “normal case” implementation speed).

Personally, I’m a strong proponent of dumb API clients.

I believe smart clients encourage engineers to treat a SOA as a monolithic application, and that’s a leaky abstraction. For our current setup, each API request we perform–even within the same colocation–imposes about a 20ms cost simply for uWSGI to service the request (plus any additional time to perform logic or lookups within the API itself).

That’s a lot of overhead to abstract away.
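Larson’s arithmetic is worth spelling out, because it is the core of the dumb-client argument: if every cross-service request carries a fixed servicing cost (he cites about 20ms for uWSGI alone in his setup), a page that fans out into N sequential calls pays N times that cost before any real work happens. A smart client can hide that from the engineer; a dumb client keeps it in plain view. The sketch below just illustrates the multiplication; the 20ms figure is his, and the call count is made up.

```python
# Illustrative only: the fixed per-request overhead Larson cites for
# his setup (~20ms per API request just for uWSGI to service it).
PER_REQUEST_OVERHEAD_MS = 20


def sequential_overhead_ms(n_calls, overhead_ms=PER_REQUEST_OVERHEAD_MS):
    """Fixed transport/servicing overhead for n sequential API calls,
    before any actual logic or lookups in the APIs themselves."""
    return n_calls * overhead_ms


# A page that naively issues 15 sequential API calls pays 300ms of
# pure overhead -- exactly the cost a smart client abstracts away
# and a dumb client forces you to confront.
```

This is why treating an SOA as if it were a monolith is a leaky abstraction: in-process function calls are effectively free, but these are not.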

Clearly, this is the current trend in the industry.

Back in 2006 David Heinemeier Hansson felt that the complex enterprise standards were coming to an end:

It feels like we’ve reached the last twenty minutes of A New Hope. The rebellion has the schematics for the deathstar and their army has charted the course for a final showdown. The battle is far from over, but you, the viewer, are no longer in doubt which way it’s going to turn out.

Yet to the commanders on deck, I’m sure it looks like they have nothing to worry about. The standardization of the standards is progressing full-speed ahead. We have committees to oversee the committees. So the mumblings of a small band of renegade hackers is hardly going to matter. Don’t they know that the battle station is soon to be fully operational?

It’s a common and recurrent theme, of course. I’m sure the pushers of EJB and Corba felt equally invincible long after their rebels had captured the blueprints for their destruction. Perhaps that’s just how a large sector of the IT industry has to work. There must be a new frontier of bottom-less complexity available to get lost in. Something that needs tooling, big consulting houses, five-year mission statements, and barriers of entry and exit.

This movement is extremely slow, but at this point it seems to have momentum.

Will Larson’s article repeats some of the points made in the article on Martin Fowler’s website, though Larson uses different language to describe similar ideas:

If your organization manages system architecture more through voluntary collaboration and evangelism than fiat, then the easiest way to prevent service adoption is creating a service which unifies a capability which several teams are actively implementing.

On first reflection, this seems obviously wrong: of course you should avoid writing the same component twice.

In practice though, if multiple teams are already implementing similar functionality and are committed to a schedule, then it’s probably too late to try to unify implementations without forcing one or both teams to discard their project’s schedule or design. After the various implementations are stable, then it’s a good time to use those as concrete requirements for building a shared service.

File this one under “getting to the destination faster by avoiding fruitless debate.”