November 19th, 2018
(written by lawrence krubner, however indented passages are often quotes). You can contact lawrence at: email@example.com
Tim Bray has an interesting article about the future of REST. However, I fail to understand this sentence:
“To start with, transient query surges are no longer a problem”
Why? The external world can still send unexpected surges of traffic, yes?
Anyway, this is worth reading:
Post-REST: Messaging and Eventing · This approach is all over, and I mean all over, the cloud infrastructure that I work on. The idea is you get a request, you validate it, maybe you do some computation on it, then you drop it on a queue (or bus, or stream, or whatever you want to call it) and forget about it, it’s not your problem any more.
The next stage of request handling is implemented by services that read the queue and either route an answer back to the original requester or pass it on to another service stage. Now, for this to work, the queues in question have to be fast (which, these days, they are), scalable (which they are), and very, very durable (which they are).
There are a lot of wins here: To start with, transient query surges are no longer a problem. Also, once you’ve got a message stream you can do fan-out and filtering and assembly and subsetting and all sorts of other useful stuff, without disturbing the operations of the upstream message source.
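The validate-then-enqueue pattern Bray describes can be sketched with Python's standard library, using a thread-safe in-process queue as a stand-in for a durable message bus like Kafka or SQS (all names here are illustrative, not from his article):

```python
import queue
import threading

# A bounded, thread-safe queue stands in for the durable,
# scalable message bus Bray describes.
request_queue = queue.Queue(maxsize=1000)

def handle_request(payload):
    """Front-end stage: validate, enqueue, and return immediately.

    A transient surge of requests only fills the queue; it does
    not slow down or overload the downstream processing stage.
    """
    if not isinstance(payload, dict) or "id" not in payload:
        return "rejected"          # validation happens up front
    request_queue.put(payload)     # fire-and-forget: no longer our problem
    return "accepted"

def worker(results):
    """Back-end stage: drains the queue at its own pace."""
    while True:
        payload = request_queue.get()
        if payload is None:        # sentinel: shut down
            break
        results.append(payload["id"])   # stand-in for real processing
        request_queue.task_done()

results = []
t = threading.Thread(target=worker, args=(results,))
t.start()

# Simulate a transient surge: 100 requests arrive "at once".
statuses = [handle_request({"id": n}) for n in range(100)]
request_queue.put(None)            # tell the worker to stop
t.join()

print(statuses.count("accepted"))  # prints 100: every request was absorbed
print(len(results))                # prints 100: all processed, at the worker's pace
```

The surge is absorbed by the queue rather than by the service behind it, which seems to be what Bray means by surges no longer being a problem: the front end only does cheap validation and enqueueing, while the slow work drains at whatever rate the back end can sustain.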