For the last 10,000 years, the banking industry has relied on eventual consistency

(written by lawrence krubner, however indented passages are often quotes)

Why do computer programmers feel smug offering the following example, when it is so clearly wrong?

Transactions are important to a database because banks must keep track of money. Suppose a person moves $100 from their Savings account to their Checking account. The $100 is added to Checking, and just then the power fails and the computers die, before the $100 can be subtracted from Savings. The person now has an extra $100 which they should not have.
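For reference, this is what the ACID answer to that scenario looks like: both updates happen inside one transaction, so a crash between them leaves no half-applied state. A minimal sketch in Python with SQLite; the table and account names are my own invention:

```python
import sqlite3

# An in-memory database with two accounts (names and balances invented).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('savings', 500), ('checking', 0)")
conn.commit()

def transfer(conn, src, dst, amount):
    # "with conn" opens a transaction: it commits if the block succeeds,
    # and rolls back if anything inside it raises.
    try:
        with conn:
            conn.execute(
                "UPDATE accounts SET balance = balance - ? WHERE name = ?",
                (amount, src))
            conn.execute(
                "UPDATE accounts SET balance = balance + ? WHERE name = ?",
                (amount, dst))
    except sqlite3.Error:
        pass  # on failure, neither update is visible

transfer(conn, "savings", "checking", 100)
print(dict(conn.execute("SELECT name, balance FROM accounts")))
# both rows changed together: savings 400, checking 100
```

The point of the example is the all-or-nothing guarantee; whether that guarantee is worth its cost is exactly what the rest of this post disputes.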

This is the classic example, one I have read many times. Advocates of ACID databases (which support transactions) offer it to demonstrate their superior intellect. And yet the banking industry has never worked this way. For thousands of years, transactions were handled by humans, in different locations, who would later reconcile the different deposits and withdrawals. The default has always been “eventual consistency”.

There is a political issue here, though I don’t entirely understand the origin of it. Those of us who rely on eventual consistency in our work are often forced to justify the choice, but those who rely on ACID databases never have to justify their choice. Why the asymmetry?

I was pleased to read this criticism of the banking example:

Your ATM transaction must go through, so Availability is more important than consistency. If the ATM is down then you aren’t making money. If you can fudge the consistency, stay up, and compensate for other mistakes (which are rare), you’ll make more money. That’s the space most enterprises find themselves in, so BASE is more popular than it used to be.

This is not a new problem for the financial industry. They’ve never had consistency because historically they’ve never had perfect communication. Instead, the financial industry depends on auditing. What accounts for the consistency of bank data is not the consistency of its databases but the fact that everything is written down twice, in a permanent and unalterable record, and reconciled later. The idea of financial compensation for errors is built deeply into the financial system.
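That reconciliation step can be sketched in a few lines: each side keeps its own append-only record of the same transfers, and an auditor later compares the two and flags anything that appears on one side but not the other. The ledgers and transaction ids below are invented for illustration:

```python
from collections import Counter

# Two independent append-only records of the same transfers
# (ids and amounts invented). tx3 has not yet reached the branch.
bank_ledger   = [("tx1", 100), ("tx2", -40), ("tx3", 25)]
branch_ledger = [("tx1", 100), ("tx2", -40)]

def reconcile(a, b):
    # Counter subtraction finds entries present in one record
    # but missing from the other, in either direction.
    ca, cb = Counter(a), Counter(b)
    return list((ca - cb).elements()), list((cb - ca).elements())

missing_at_branch, missing_at_bank = reconcile(bank_ledger, branch_ledger)
print(missing_at_branch)  # [('tx3', 25)] -> raise a compensating entry
print(missing_at_bank)    # []
```

Each discrepancy becomes a compensating entry rather than a rolled-back transaction, which is the auditing model the quote describes.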

During the Renaissance, when the modern banking system started to take shape, everything was partitioned. If letters, your data, are transported by horse or by ship, then your data will have very low consistency, yet they still had an amazingly rich and successful banking system. Transactions were unnecessary.

ATMs, for example, chose commutative operations like increment and decrement, so the order in which the operations are applied doesn’t matter. They are reorderable and can be made consistent later. If an ATM is disconnected from the network, then when the partition eventually heals, the ATM sends a list of operations to the bank and the end balance will still be correct. The issue, obviously, is that you might withdraw more money than you have, so the end result might be consistent but negative, which can’t be compensated for by asking for the money back. Instead, the bank will reward you with an overdraft penalty.
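The commutativity argument can be shown directly: because deposits and withdrawals are plain additions, replaying an ATM’s buffered operations in any order gives the same final balance once the partition heals. The balances and amounts below are invented:

```python
import random

# Operations queued while the ATM was disconnected (amounts invented):
# positive = deposit, negative = withdrawal.
ops = [+20, -50, +100, -30]

def apply_ops(balance, operations):
    # Addition is commutative and associative, so order never matters.
    for delta in operations:
        balance += delta
    return balance

shuffled = ops[:]
random.shuffle(shuffled)

# Any ordering of the same operations yields the same end balance.
assert apply_ops(200, ops) == apply_ops(200, shuffled) == 240
```

This is why the operations log, not the balance itself, is what the ATM ships back to the bank: logs of commutative operations merge cleanly; snapshots of state do not.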

The hidden philosophy is that you are trying to bound and manage your risk, yet still have all operations available. In the ATM case this would be a limit on the maximum amount of money you can take out at any one time. It’s not that big of a risk. ATMs are profitable so the occasional loss is just the risk of doing business.
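Bounding that risk can be sketched as a simple cap on offline withdrawals: the ATM stays available while partitioned, but limits each withdrawal and its total exposure so the worst-case loss is known in advance. The class name and limit values below are illustrative, not real bank policy:

```python
class OfflineAtm:
    """Sketch of risk-bounded availability (limits are invented)."""

    def __init__(self, per_withdrawal=200, max_exposure=1000):
        self.per_withdrawal = per_withdrawal  # cap per transaction
        self.max_exposure = max_exposure      # total cap while disconnected
        self.dispensed = 0

    def authorize(self, amount):
        # Approve a withdrawal only if both caps are respected;
        # the stale balance is never consulted.
        if amount > self.per_withdrawal:
            return False
        if self.dispensed + amount > self.max_exposure:
            return False
        self.dispensed += amount
        return True

atm = OfflineAtm()
print(atm.authorize(150))  # within both caps: approved
print(atm.authorize(500))  # over the per-withdrawal cap: refused
```

The design choice is that availability is preserved unconditionally, while the cost of inconsistency is converted into a bounded, budgetable business expense.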

Consistency, it turns out, is not the Holy Grail. What trumps consistency is:

Risk Management
