Why are people ignoring the problems with NodeJS?

(written by lawrence krubner, however indented passages are often quotes). You can contact lawrence at: lawrence@krubner.com

Two weeks ago I wrote “A surprising NodeJS failure mode: deterministic code becomes probabilistic under load”. Since then I’ve been reading up on NodeJS and learning more about its substantial failure modes. I’m left feeling very surprised at the success that NodeJS is having.

Just to give you a sense of what I mean, this is how The New Stack sums up the success of NodeJS:

Ready for a Long Term Node Relationship?

In just seven short years, Node.js has gone from scrappy, Linux-only JavaScript runtime environment to genuine web-dominating juggernaut. The platform, distributed by the Node.js Foundation, has five million downloads a month and runs under the hood of everything from cloud stacks and API engines to mobile websites — and even honest-to-god robots.

Node.js is also increasingly embraced by the enterprise and sharing economy sectors. Behemoth companies like Walmart, Netflix and the New York Times have based server-side operations on Node, and Uber and AirBnB depend on Node.js to match up customers and rides/lodging on the fly.

But NodeJS is fragile. StrongLoop goes into detail about how blocking the main event loop can drag the whole app to a stop:

Here’s a fun fact: every function call that does CPU work also blocks. …

This example just takes the request’s body and parses it. So, it should work great until somebody POSTs a 15 MB JSON file! Executing the JSON.parse() call on a 15 MB JSON file took about 1.5 seconds. Stringifying a JSON data structure of this size with JSON.stringify(json, null, 2) takes about 3 seconds.

You may be thinking, “Oh, 1.5 seconds, 3 seconds, that’s still pretty fast!” However, during this time the event loop is completely blocked and the Node process will not accept new connections or process ongoing requests. Even with smaller payloads at high concurrency, this clogging will still happen and affect performance.
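
To make this concrete, here is a minimal sketch of the kind of handler StrongLoop is describing. This is my own illustration, not code from their article: the handler looks non-blocking, but the JSON.parse() call runs on the one main thread, so a single large POST stalls every other connection on the server until the parse returns.

const http = require('http');

const server = http.createServer((req, res) => {
  let body = '';
  req.on('data', (chunk) => { body += chunk; });
  req.on('end', () => {
    try {
      // Synchronous CPU work: on a 15 MB body this can take on the order of
      // a second, and the event loop can neither accept new connections nor
      // service in-flight requests until it finishes.
      const parsed = JSON.parse(body);
      res.end('parsed ' + Object.keys(parsed).length + ' top-level keys\n');
    } catch (err) {
      res.statusCode = 400;
      res.end('invalid JSON\n');
    }
  });
});

server.listen(3000);

While that parse is running, even a trivial request to the same process just sits in the queue.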

And this exactly describes my API app:

Even with smaller payloads at high concurrency, this clogging will still happen and affect performance.

So if parsing my smaller payloads only takes 50 milliseconds, and the app is getting hit with a request every 50 milliseconds, then the app is blocked nearly 100% of the time. This is the problem I’ve been struggling with. The StrongLoop post talks about it as if this is a perfectly normal problem that everyone already knew about.
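
You can watch this from inside the process. The probe below is my own sketch, not anything from the article: it schedules a callback every 50 milliseconds and reports how late each one actually fires. Under the load described above, the reported lag climbs toward the full parse time, which is another way of saying the event loop is busy nearly 100% of the time.

const INTERVAL_MS = 50;
let expected = Date.now() + INTERVAL_MS;

setInterval(() => {
  // If the event loop was free, the timer fires roughly on schedule and the
  // lag is near zero; if a JSON.parse (or any other CPU-bound call) held the
  // loop, the lag shows roughly how long we were blocked.
  const lag = Date.now() - expected;
  if (lag > 10) {
    console.log('event loop was blocked for about ' + lag + ' ms');
  }
  expected = Date.now() + INTERVAL_MS;
}, INTERVAL_MS);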

This leaves me with a strong impression that I should be using a platform that has real threads, like the JVM.

NodeJS seems highly optimized for read-heavy websites. Nothing else, just that. One has to be careful, because once we developers start using a platform, we often assume it can be extended to a lot of different uses. And some platforms, such as the JVM, really can be extended in a lot of different ways. But there are very few cases where NodeJS is the right answer.

Source