More negative views about Rails

(Written by Lawrence Krubner; indented passages are often quotes.) You can contact Lawrence at lawrence@krubner.com, or follow me on Twitter.

Rails lacks a story for concurrency. The piece quoted below was written by a Go programmer. Their criticisms are similar to mine, though for me the answer is “use Clojure,” and so I end up doing JVM tuning anyway, the very thing the author cites as something scary that keeps people away from JRuby. My impression is that the case against JRuby is weaker than the case against MRI Ruby (the C implementation).

Rails is fundamentally – and catastrophically – slow.

This well-known set of webapp benchmarks shows that, for a single-database-query workload, Rails offers less than 1/20th the throughput of, say, Go, 25 times the average latency of Go, and more than 6 times the latency variance of Go. And Go isn’t even the most responsive or highest-throughput of the available options. The story is even more alarming for elemental workloads that focus exclusively on the framework internals.

And it’s no wonder. Until Rails 4, the core framework didn’t enable concurrent request handling by default. And even in Rails 4, concurrent request handling is multiplexed on a single CPU core, leaving Rails unable to take advantage of the parallelism of modern architectures. Don’t mistake concurrency for parallelism: any web framework that cannot take advantage of multiple CPU cores will suffer greatly from a throughput standpoint.
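
To make the concurrency-versus-parallelism distinction concrete, here is a minimal configuration sketch, assuming the Puma server (the file name and the numbers are illustrative, not from the original post). On MRI, the threads setting gives concurrency for I/O-bound work, but the interpreter lock keeps Ruby code on one core at a time; only separate worker processes actually occupy multiple cores.

```ruby
# config/puma.rb -- illustrative values only

# Threads within one process: concurrent request handling, but on MRI
# the global interpreter lock means only one thread executes Ruby code
# at a time, so extra threads do not use extra CPU cores for Ruby work.
threads 5, 5

# Worker processes: each is a separate OS process with its own
# interpreter, so this is how a Rails app occupies multiple cores on MRI.
workers 4

# Load the application before forking so workers can share memory
# via copy-on-write.
preload_app!
```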

At this point it would be reasonable to bring up JRuby. I should admit up front that I’ve never used JRuby seriously, as the Rails apps I worked on had dependencies on C libraries, which of course ruled JRuby out. Nevertheless, a few thoughts:

Deploying JRuby means entering the dark world of JVM tuning: one of the reasons people want to use frameworks like Rails is to get away from JVM tuning, so it’s unfortunate to drag it back into the picture.

That said, JRuby sometimes wildly outperforms cRuby on CPU benchmarks… yet it also sometimes wildly underperforms cRuby for those same benchmarks: read more here.

Although JRuby does allow for parallel processing of requests across CPUs, some gems unfortunately still make the simplifying assumption that there is only ever a single extant request in the Rails process, and thus they are not safe for use in a concurrent environment. One can avoid those gems once they are identified, but in my experience you find out about them the hard way. Hopefully Rails 4’s concurrent-by-default mode will put more pressure on gem maintainers, but this is a rocky place to start from. The Rails community hasn’t been thinking about concurrency (much less parallelism) for many years running, and it’s a difficult cultural corner to turn.
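
As a sketch of the kind of gem-level assumption being described (the module and method names here are invented for illustration, not taken from any real gem): state stashed in a single per-process slot “for the current request” is silently shared across every in-flight request once requests run concurrently.

```ruby
# Hypothetical gem code, invented purely for illustration.
module AuditTagger
  # One slot per process: fine while the process only ever handles a
  # single request at a time, a data race once requests run concurrently.
  def self.current_user=(user)
    @current_user = user
  end

  def self.current_user
    @current_user
  end
end

# Thread 1 (request A):  AuditTagger.current_user = alice
# Thread 2 (request B):  AuditTagger.current_user = bob
# Request A then logs an action attributed to bob.
#
# A thread-safe variant would keep the state per thread instead, e.g.:
#   Thread.current[:audit_user] = user
```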

Independent of JRuby, the best way to make use of a modern multi-core server in Rails is to run a single instance [of Rails] on each CPU core. Then one factors any large caches out of those Rails processes (via memcached or similar) and – ideally – places them on the same machine in order to make proper use of local memory. (That said, it seems that most people run the cache servers “wherever,” which is obviously problematic from a latency perspective.)
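
A concrete sketch of that deployment shape, with the host and port as illustrative assumptions rather than details from the original post: one Rails process per core (for example via the workers setting shown earlier), all pointing at a single memcached instance on the same machine.

```ruby
# config/environments/production.rb -- sketch only; the host and port
# are illustrative. Every per-core Rails process points at the same
# external memcached, ideally on the same machine so cache fetches stay
# on the loopback interface instead of crossing the network.
Rails.application.configure do
  config.cache_store = :mem_cache_store, "127.0.0.1:11211"
end
```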

Compare that to a system design where the web framework can use all available CPU cores and use a single in-memory cache; latency is now reduced to the time it takes to grab a read lock and fetch from main memory. It’s also a simpler programming model since there are no potential cache-fetch RPC failures to contend with.
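
For contrast, here is a minimal sketch of such an in-process cache, assuming the concurrent-ruby gem for the lock (the class and key names are invented). Under MRI this buys little, since the interpreter lock serializes the threads anyway; under JRuby, or a natively threaded framework, every core can serve requests while sharing this single copy of the hot data, and a fetch is just a read lock plus a hash lookup.

```ruby
require "concurrent"   # concurrent-ruby gem, assumed for the lock

# Illustrative only: a tiny in-process cache guarded by a read/write lock.
class InProcessCache
  def initialize
    @lock  = Concurrent::ReadWriteLock.new
    @store = {}
  end

  def fetch(key)
    @lock.with_read_lock { @store[key] }          # read lock + memory read
  end

  def write(key, value)
    @lock.with_write_lock { @store[key] = value }
  end
end

CACHE = InProcessCache.new
CACHE.write("plan:free", { max_projects: 3 })
CACHE.fetch("plan:free")   # no RPC, no remote cache failure mode to handle
```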

Post external references

  1. http://blog.bensigelman.org/post/57543889435/why-nobody-should-use-rails-for-anything-ever