July 20th, 2016
(written by lawrence krubner, however indented passages are often quotes). You can contact lawrence at: firstname.lastname@example.org
Much like you’d chuck memcached on each of your web servers and access them in a ring, Darner can occupy a small niche of each box’s resources across your fleet. Tens of MB of RAM and negligible CPU open up hundreds of gigabytes of queue spool per node. As queue size grows, memory usage remains constant.
Contrast this with Redis, which is speedy but limited in queue size to what will fit in the resident memory of the process.
RabbitMQ can page parts of its queue out of process, but some of its resident memory usage still grows linearly with queue size, and when RabbitMQ falls behind on paging its backlog to disk, it throttles clients and ultimately grinds to a halt.
Besides performance, the big difference between RabbitMQ and Darner is the same as the difference between RabbitMQ and Kestrel (on which Darner is based): Kestrel/Darner’s protocol is much simpler (it’s just memcache), and installing and configuring Kestrel/Darner is much simpler, particularly for a cluster.
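To make the “it’s just memcache” point concrete, here is a minimal sketch of how a client would frame queue operations for a Kestrel/Darner-style server using the memcache text protocol, where (per the Kestrel/Darner docs) a `set` pushes an item onto the named queue and a `get` pops one off. The queue name `jobs` and payload are illustrative; this only builds the wire-format commands, it does not talk to a real server.

```python
def enqueue_cmd(queue: str, payload: bytes) -> bytes:
    """Frame a memcache 'set' command: for Darner/Kestrel this
    pushes `payload` onto the queue named `queue`.
    Format: set <key> <flags> <exptime> <bytes>\r\n<data>\r\n"""
    return b"set %s 0 0 %d\r\n%s\r\n" % (queue.encode(), len(payload), payload)

def dequeue_cmd(queue: str) -> bytes:
    """Frame a memcache 'get' command: for Darner/Kestrel this
    pops the next item off the queue named `queue`."""
    return b"get %s\r\n" % queue.encode()

# Example framing for a hypothetical "jobs" queue:
print(enqueue_cmd("jobs", b"hello"))  # b'set jobs 0 0 5\r\nhello\r\n'
print(dequeue_cmd("jobs"))            # b'get jobs\r\n'
```

Because this is the same protocol memcached speaks, any existing memcache client library can double as a queue client, which is a large part of why installing and integrating Kestrel/Darner is so simple.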