When Nginx becomes the bottleneck

(written by lawrence krubner, however indented passages are often quotes). You can contact lawrence at: lawrence@krubner.com, or follow me on Twitter.

This makes me sad. I love Nginx as a reverse proxy, so long as it is invisible and I never have to think about it. Realizing that it, too, can be a source of problems is really discouraging.

47,135 connections in TIME_WAIT! Moreover, ss indicates that they are all closed connections. This suggests the server is burning through a large portion of the available port range, which implies that it is allocating a new port for each connection it’s handling. Tweaking the networking settings helped firefight the problem a bit, but the socket range was still getting saturated.
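
For anyone who wants to check for the same symptom, the usual way to count sockets stuck in TIME_WAIT on a Linux box is with ss itself. These commands are generic examples of the technique, not taken from the article:

    # Count TCP sockets currently in TIME_WAIT (subtract one for the header line)
    ss -tan state time-wait | wc -l

    # Show the ephemeral port range the kernel may use for outgoing connections
    sysctl net.ipv4.ip_local_port_range

Once the TIME_WAIT count approaches the size of that port range, Nginx starts running out of local ports for new upstream connections, which is exactly the saturation described above.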

After some digging around, I uncovered some documentation about an upstream keepalive directive. The docs state:

Sets the maximum number of idle keepalive connections to upstream servers that are retained in the cache per one worker process

This is interesting. In theory, this will help minimise connection wastage by pumping requests down connections that have already been established and cached. The documentation also states that the proxy_http_version directive should be set to “1.1” and the “Connection” header cleared. On further research, it’s clear this is a good idea, since HTTP/1.1 uses TCP connections far more efficiently than HTTP/1.0, which is what Nginx’s proxy module speaks to upstreams by default.
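
Putting those pieces together, a minimal configuration sketch would look something like the following. The upstream name, the backend address, and the keepalive value of 64 are placeholders of my own, not values from the article:

    upstream backend {
        server 127.0.0.1:3000;
        # Keep up to 64 idle connections to this upstream open, per worker process
        keepalive 64;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://backend;
            # Speak HTTP/1.1 to the upstream and clear the Connection header,
            # otherwise Nginx closes the connection after every request and
            # the keepalive cache above never gets used
            proxy_http_version 1.1;
            proxy_set_header Connection "";
        }
    }

Note that keepalive only caps how many idle connections are kept in the cache; it does not limit the total number of connections a worker process will open to the upstream.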

Post external references

  1. https://engineering.gosquared.com/optimising-nginx-node-js-and-networking-for-heavy-workloads