September 11th, 2015
(written by lawrence krubner, however indented passages are often quotes). You can contact lawrence at: email@example.com
It does feel like the world is moving closer to distributed IPC. We are not there, but the push toward microservices seems a step in that direction. The warnings of the past feel less relevant now.
When I wrote Patterns of Enterprise Application Architecture, I coined what I called the First Law of Distributed Object Design: “don’t distribute your objects”. In recent months there’s been a lot of interest in microservices, which has led a few people to ask whether microservices are in contravention of this law, and, if so, why I am in favor of them.
It’s important to note that in this first law statement, I use the phrase “distributed objects”. This reflects an idea that was rather in vogue in the late ’90s and early ’00s but has since (rightly) fallen out of favor. The idea of distributed objects is that you could design objects and choose to use these same objects either in-process or remote, where remote might mean another process on the same machine, or on a different machine. Clever middleware, such as DCOM or a CORBA implementation, would handle the in-process/remote distinction, so the system could be written without regard to where objects lived, and the division into processes could be decided independently of how the application was designed.
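The distributed-objects idea described above amounts to what is often called location transparency: the caller programs against one interface, and a middleware-generated stub hides whether the implementation is in-process or on the other end of a wire. A minimal sketch of that shape, with all names hypothetical and the "wire" reduced to a pluggable transport:

```python
# Sketch of location transparency, in the DCOM/CORBA spirit: the caller
# sees one interface; a proxy hides whether the object is local or remote.
# All class and method names here are illustrative, not from any real ORB.
from abc import ABC, abstractmethod


class OrderService(ABC):
    @abstractmethod
    def total(self, order_id: str) -> int: ...


class LocalOrderService(OrderService):
    """In-process implementation: a plain method call on local data."""
    def __init__(self, orders: dict[str, int]):
        self._orders = orders

    def total(self, order_id: str) -> int:
        return self._orders[order_id]


class RemoteOrderServiceProxy(OrderService):
    """Stand-in for a middleware-generated stub: same interface, but each
    call is really a request over some transport (network, IPC, ...)."""
    def __init__(self, transport):
        self._transport = transport

    def total(self, order_id: str) -> int:
        # In a real ORB this would marshal arguments, send them over the
        # wire, and unmarshal the reply.
        return self._transport.invoke("total", order_id)


def report(service: OrderService, order_id: str) -> str:
    # The calling code cannot tell, from the code alone, which kind it has.
    return f"order {order_id}: {service.total(order_id)}"
```

The appeal is obvious: `report` works unchanged against either implementation, so the process boundaries look like a deployment detail. The objection that follows is that this appearance is misleading.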
My objection to the notion of distributed objects was that although you can encapsulate many things behind object boundaries, you can’t encapsulate the remote/in-process distinction. An in-process function call is fast and always succeeds (in that any exceptions are due to the application, not due to the mere fact of making the call). Remote calls, however, are orders of magnitude slower, and there’s always a chance that the call will fail due to a failure in the remote process or the connection.
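The distinction described above shows up directly in the code a caller must write. A rough sketch, using a raw socket as a stand-in for any remote transport (the host, port, and wire format are illustrative, not a real protocol):

```python
# Sketch of why the remote/in-process distinction leaks: a remote call can
# fail, or stall, for reasons that have nothing to do with application
# logic, so the caller must handle errors a local call can never raise.
import socket


def local_total(orders: dict[str, int], order_id: str) -> int:
    # In-process: effectively instantaneous, and any exception (e.g. a
    # KeyError here) is an application bug, not a consequence of calling.
    return orders[order_id]


def remote_total(host: str, port: int, order_id: str,
                 timeout: float = 2.0) -> int:
    # Remote: orders of magnitude slower, needs a timeout, and may fail
    # partway through even when the application logic is correct.
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.sendall(order_id.encode("utf-8"))
            return int(s.recv(64))
    except OSError as e:  # refused connection, timeout, reset, ...
        raise RuntimeError(f"remote call failed: {e}") from e
```

No middleware can make `remote_total` behave like `local_total`: the timeout, the partial-failure cases, and the extra error paths are inherent to crossing the process boundary, which is the heart of the objection.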