To start with, transient query surges are no longer a problem?

(written by lawrence krubner, however indented passages are often quotes). You can contact lawrence at: lawrence@krubner.com, or follow me on Twitter.

Tim Bray has an interesting article about the future of REST. However, I fail to understand this sentence:

To start with, transient query surges are no longer a problem

Why? The external world can still send unexpected surges of traffic, yes?

Anyway, this is worth reading:

Post-REST: Messaging and Eventing · This approach is all over, and I mean all over, the cloud infrastructure that I work on. The idea is you get a request, you validate it, maybe you do some computation on it, then you drop it on a queue (or bus, or stream, or whatever you want to call it) and forget about it, it's not your problem any more.

The next stage of request handling is implemented by services that read the queue and either route an answer back to the original requester or pass it on to another service stage. Now for this to work, the queues in question have to be fast (which these days, they are), scalable (which they are), and very, very durable (which they are).

There are a lot of wins here: To start with, transient query surges are no longer a problem. Also, once you've got a message stream you can do fan-out and filtering and assembly and subsetting and all sorts of other useful stuff, without disturbing the operations of the upstream message source.
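The accept-validate-enqueue pattern Bray describes can be sketched in a few lines. This is a minimal, hypothetical illustration: it uses Python's in-process `queue.Queue` as a stand-in for a real durable bus (SQS, Kafka, and so on), and all the names are made up for the example. It also shows the sense in which a surge stops being the front end's problem: a burst of requests simply piles up in the queue and the worker drains it at its own pace, rather than the surge overwhelming the request handler.

```python
import queue
import threading

# In-process stand-in for a durable queue/bus (e.g. SQS or Kafka).
request_queue = queue.Queue()
results = []

def handle_request(payload):
    """Front-end stage: validate, enqueue, and forget."""
    if not isinstance(payload, dict) or "id" not in payload:
        raise ValueError("invalid request")
    request_queue.put(payload)  # from here on, it's the queue's problem

def worker():
    """Back-end stage: read the queue and process at its own pace."""
    while True:
        payload = request_queue.get()
        if payload is None:  # sentinel: stop the worker
            break
        results.append({"id": payload["id"], "status": "done"})
        request_queue.task_done()

t = threading.Thread(target=worker)
t.start()

# A transient surge: 100 requests arrive at once. None are rejected;
# they wait in the queue until the worker gets to them.
for i in range(100):
    handle_request({"id": i})

request_queue.put(None)  # shut the worker down after the backlog drains
t.join()
print(len(results))  # 100
```

Of course, an in-process queue disappears with the process; Bray's point about durability is exactly that a real message bus does not, which is what lets the front end safely forget about each request once it is enqueued.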

Source