Can event logs, as a software architecture, really work?

(written by lawrence krubner, however indented passages are often quotes). You can contact lawrence at: lawrence@krubner.com

This is an interesting bit of criticism of an architecture that has certainly gotten a lot of attention over the last 4 years:

I have worked on, or cleaned up, 4 different CQRS/ES projects. They have all failed. Each time the people leading the project and championing the architecture were smart, capable, technically adept folks, but they couldn’t make it work.

There’s more than one flavor of this particular arch, but Event Sourcing in general is simply not very useful for most projects. I’m sure there are use cases where it shines, but I have a hard time thinking of any. Versioning events, projection, reporting, maintenance, administration, dealing with failures, debugging, etc etc are all more challenging than with a traditional approach.

Two of the projects I worked on used Event Store. That was one of the least production ready data stores I’ve encountered (the other being Datomic).

I see a lot of excitement about CQRS/ES every two years or so (since 2010) and I strongly believe it is the wrong choice for just about every application.

And also:

Wow, this succinctly sums up our experience. Fun to develop against, absolute nightmare to support in production.
The only place I’d recommend it these days is where the business views their state as an event stream, maybe finance/stocks. Not developer-forced events like “customer address updated” or “user email changed”.
Even in workflow systems I’ve dealt with, the business doesn’t view their state as an event stream. The state is where it is; how it got there is an interesting footnote.

I wonder if this will be like Object Oriented Programming, another design that was supposed to help but which in fact increased the number of problems faced.

One of the projects did use Kafka as the “event log”. There were stability issues with the version of Zookeeper that was used. From a writer/reader perspective Kafka was sufficiently performant when it was up. (The Zookeeper issue was eventually fixed as I recall, but by then the damage was done in terms of political capital spent and lost.)
The big issues didn’t really have that much to do with the persistent store of events. The bigger issue was the fact that as new features get added to your application, your event payloads change. Well, in order for projection (particularly re-projection in the case of an issue) to work, your code needs to know how to read and process all versions of every event. Of course, there are techniques like snapshotting to give you point in time “good states” so you can eventually deprecate some of these events, but thinking about it is challenging.
Additionally, most folks argue that the event store should be immutable. This is great until some kind of bad event gets in there. Now your code needs to know how to read, and discard, this bad event forever (or until a snapshotted point in time).
Finally, projection is not the panacea the evangelists would have you believe. Inevitably there will be syncing issues between the event store and the projected database/elasticsearch cluster/mongo instance, whatever. And what do you do then? Re-project! But that is not easy :)
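The versioning problem described above is easier to see in code. Here is a minimal sketch of a projector that has to handle every historical shape of an event during re-projection. The event names, version numbers, and fields are all invented for illustration; real systems would use a schema registry or dedicated upcasters, but the burden is the same: old payloads never go away.

```python
# Hypothetical sketch: replaying an event stream whose payloads have
# changed shape over time. Event names and fields are invented.

def project_address(events):
    """Fold a stream of address events into the current state."""
    state = {}
    for event in events:
        try:
            if event["type"] == "CustomerMoved":
                if event.get("version", 1) == 1:
                    # v1 stored the address as a single string
                    state["address"] = event["address"]
                else:
                    # v2 split the address into structured fields
                    state["address"] = ", ".join(
                        [event["street"], event["city"], event["zip"]]
                    )
        except KeyError:
            # A malformed "bad" event: with an immutable store we cannot
            # delete it, so the projector must skip it forever (or until
            # a snapshot lets us stop replaying this far back).
            continue
    return state

events = [
    {"type": "CustomerMoved", "version": 1,
     "address": "1 Main St, Springfield, 11111"},
    {"type": "CustomerMoved"},  # bad event, payload missing
    {"type": "CustomerMoved", "version": 2,
     "street": "2 Oak Ave", "city": "Shelbyville", "zip": "22222"},
]
print(project_address(events))
# {'address': '2 Oak Ave, Shelbyville, 22222'}
```

Note how the version branches and the bad-event guard accumulate: every new payload shape adds another branch that must live in the projector for as long as those events remain in the store.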

And this:

We use the strategy of versioned events. Event data is still immutable, but older events are upgraded to the latest structure at read time. This works reasonably well but it is not ideal.
Storing data as immutable events implies that all data ever generated by your application becomes available to future versions of your application. Writing an application that can handle all the forms of your data across time is obviously more complex but it’s a necessary consideration if you decide to go with event sourcing. Unfortunately, you cannot have all the benefits of available and useful unaltered historical data without also putting in the engineering effort to support it.
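The "versioned events, upgraded at read time" strategy in that comment is often called upcasting. A minimal sketch, with invented version numbers and field names: stored data stays immutable, and each read steps an old event through a chain of upgrade functions until it reaches the latest structure.

```python
# Hypothetical sketch of read-time event upgrading ("upcasting").
# Versions and fields are invented for illustration.

def upgrade_v1_to_v2(event):
    # v2 split "name" into first/last; fabricate the split on read
    first, _, last = event["name"].partition(" ")
    return {"version": 2, "first_name": first, "last_name": last,
            "email": event["email"]}

def upgrade_v2_to_v3(event):
    # v3 added an optional phone field with a default
    return {**event, "version": 3, "phone": None}

UPGRADERS = {1: upgrade_v1_to_v2, 2: upgrade_v2_to_v3}
LATEST = 3

def read_event(stored):
    """Step a stored event up to the latest version, one hop at a time."""
    event = dict(stored)  # never mutate the stored, immutable copy
    while event["version"] < LATEST:
        event = UPGRADERS[event["version"]](event)
    return event

old = {"version": 1, "name": "Ada Lovelace", "email": "ada@example.com"}
print(read_event(old))
```

The chain keeps each upgrade small, but it also illustrates the commenter's point: every version the application has ever written needs a live, tested upgrade path, forever.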

Source