Transducers in JavaScript

(written by Lawrence Krubner; however, indented passages are often quotes). You can contact Lawrence at: lawrence@krubner.com, or follow me on Twitter.

Interesting:

// filterer takes a predicate f and returns a transducer: it wraps a
// combining function so that only values passing f get combined in.
function filterer(f) {
  return function(combine) {
    return function(result, x) {
      return f(x) ? combine(result, x) : result;
    };
  };
}
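The reduce example below also leans on a mapper and an append helper that this excerpt doesn't show. A minimal sketch, assuming they follow the same shape as filterer (mapper transforms each value before combining it in; append is the combining function that builds the output array):

function mapper(f) {
  return function(combine) {
    return function(result, x) {
      return combine(result, f(x));
    };
  };
}

function append(arr, x) {
  arr.push(x);
  return arr;
}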

var arr = [1, 2, 3, 4];  // sample input consistent with the result below

arr.reduce(
  filterer(function(x) { return x > 2; })(
    mapper(function(x) { return x * 2; })(append)
  ),
  []
);
// -> [ 6, 8 ]

Now that we’ve decoupled the data that comes in, how it’s transformed, and what comes out, we have an insane amount of power. And with a pretty simple API as well.

Did you notice that last example with channels? That’s right: a js-csp channel, which I introduced in my last post, can now take a transducer to apply over each item that passes through the channel. This easily lets us write Rx-style (reactive) code by simply reusing all the same transformations.

A channel is basically just a stream. You can reuse all of your familiar transformations on streams. That’s huge!
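As a rough sketch of what that reuse might look like (assuming js-csp's chan accepts a transducer as its second argument and that the transducers library exports map, filter, and compose; the exact API may differ):

var csp = require('js-csp');
var t = require('transducers.js');

// A channel whose transducer doubles each value and drops results <= 4.
// Every value put on the channel flows through the same kind of
// pipeline we used on arrays above.
var ch = csp.chan(1, t.compose(
  t.map(function(x) { return x * 2; }),
  t.filter(function(x) { return x > 4; })
));

csp.go(function*() {
  yield csp.put(ch, 1);             // 2, filtered out
  yield csp.put(ch, 3);             // passes through as 6
  console.log(yield csp.take(ch));  // -> 6
});

The point is that one transformation pipeline works unchanged on arrays and on channels.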

This is possible because transducers work differently: instead of applying each transformation to the whole collection one at a time (and creating multiple intermediate collections), they take each value separately and fire it through the whole transformation pipeline. That leads us to the next point, in which there are…

No Intermediate Allocations!

Not only do we have a super generic way of transforming data, we get good performance on large arrays. This is because transducers create no intermediate collections. If you want to apply several transformations, usually each one is performed in order, creating a new collection each time.

Transducers, however, take one item off the collection at a time and fire it through the whole transformation pipeline. So it doesn’t need any intermediate collections; each value runs through the pipeline separately.
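To make the contrast concrete, here is the same transformation written both ways, reusing the helpers from above. The chained version allocates an intermediate array after the filter step; the transducer version builds only the final array:

var arr = [1, 2, 3, 4];

// Chained methods: .filter() allocates an intermediate array,
// then .map() allocates the final one.
var chained = arr
  .filter(function(x) { return x > 2; })
  .map(function(x) { return x * 2; });

// Transducers: each value runs through filter-then-map in a single
// reduce pass, so the only allocation is the final result array.
var transduced = arr.reduce(
  filterer(function(x) { return x > 2; })(
    mapper(function(x) { return x * 2; })(append)
  ),
  []
);

// Both -> [ 6, 8 ]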

Think of it as favoring a computational burden over a memory burden. Each value runs through several nested function calls as it passes through the pipeline, but nothing is allocated along the way; the chained approach makes one pass per transformation and allocates a fresh collection each time. For small arrays the difference is negligible, but for large arrays avoiding the memory burden easily pays for the extra calls.

To be frank, early benchmarks show that this doesn’t win anything in V8 until you reach around 100,000 items, after which it really wins out. So it only matters for very large arrays, and it’s too early to post benchmarks.

Post external references

  1. http://jlongster.com/Transducers.js--A-JavaScript-Library-for-Transformation-of-Data