March 24th, 2014
(written by lawrence krubner, however indented passages are often quotes). You can contact lawrence at: firstname.lastname@example.org
To reduce lock contention, we decided to run multiple MongoDB instances on one machine and create more granular databases in each instance. Basically, data is stored in different instances based on its usage, and in every MongoDB instance one database is created for each partner.
Some people hate the fact that MongoDB forces you to do more in your own app, but I prefer designing with those constraints in mind.
This has similarities to the design I’ve been working towards with TailorMadeAnswers.com.
The important thing about MongoDB is setting the read preferences and write concerns to levels that work for your needs:
To achieve the throughput goal as well as to satisfy the data consistency/durability requirements, we also carefully set the appropriate write concerns and read preferences for all modules:
For modules that take very heavy write loads where minor data loss is acceptable, we set the write concern to errors ignored (i.e., fire-and-forget) and created comprehensive monitoring support to make sure the data flows in as expected.
For modules that only take heavy read load, where temporary data inconsistency is acceptable, we set the read preference to secondaryPreferred and created monitoring support to watch the replication lag closely.
For backend processes that rely on read-your-writes consistency, we set the write concern to acknowledged and the read preference to primary.
For modules that process critical editorial control data sent from partners, the journaled write concern is used.
The rule of thumb here is that you have to balance data consistency/durability against performance for the use cases and requirements you are dealing with. It’s worth noting that, to help its users achieve higher levels of data consistency in general, MongoDB changed the default write concern for all client drivers from fire-and-forget to acknowledged in November 2012.